Artificial Intelligence in Medicine - Ethical Issues: Past, Present, & Future Tense
February 26, 2024 / February 21, 2024
Bonnie Kaplan, PhD, FACMI
Yale Department of Biostatistics (Health Informatics); Yale Bioethics Center Scholar; Yale Information Society Project Faculty Affiliated Fellow; Yale Solomon Center for Health Law & Policy Faculty Affiliate; Affiliate, Center for Biomedical Data Science and Program for Biomedical Ethics, Yale University
Sponsored by the Program for Biomedical Ethics
Transcript
- 00:00OK. Good evening. Let's get started.
- 00:02Thank you so much for being here.
- 00:05My name is Mark Mercurio.
- 00:06I'm the director of the
- 00:07Program for Biomedical Ethics.
- 00:08And on behalf of our leadership,
- 00:10that would be Jack Hughes and Sarah Hull
- 00:12and Jen Miller and our manager, Karen Kolb.
- 00:14I welcome you here tonight.
- 00:16I think many of you are aware
- 00:18we do this twice a month.
- 00:19The schedule is online
- 00:21Biomedical Ethics at Yale.
- 00:23You can't miss us,
- 00:24and you can always just reach out to
- 00:26Karen Kolb or myself to fill you in.
- 00:28We've been planning this session
- 00:29for a while, and
- 00:30I'm very happy that it's here.
- 00:31I've been wanting to do more for us with
- 00:34this group on artificial intelligence,
- 00:35specifically the ethics of artificial
- 00:37intelligence and its use in medicine.
- 00:40And the good news is we have someone with
- 00:42expertise in this matter right here at home,
- 00:45our own Bonnie Kaplan,
- 00:47who comes to us educated at Cornell,
- 00:50with her master's and PhD, I believe,
- 00:52from the University of Chicago.
- 00:53She's on our faculty in Biostatistics.
- 00:56She's involved with ethics at the
- 00:58Solomon Center at the Law School,
- 00:59with the Bioethics Center on the main campus.
- 01:02She's,
- 01:02of course,
- 01:02a member of our affiliated faculty in
- 01:04the biomedical ethics program here.
- 01:06She's a leader in informatics and
- 01:09issues of telemedicine and technology
- 01:10on a national and international level.
- 01:12And she's been kind enough to connect me
- 01:14to some of those folks over the years.
- 01:17She also has a particular interest
- 01:18in artificial intelligence.
- 01:19And so I was so pleased when
- 01:21she kindly agreed to spend the
- 01:22evening talking to us about that.
- 01:24So what we're going to do, and
- 01:26many of you are familiar with this,
- 01:27is for the next 50 minutes or so,
- 01:29Bonnie's going to speak to us and
- 01:30give us a PowerPoint presentation.
- 01:32And after that, we'll have a conversation.
- 01:34And folks online as well as
- 01:36folks in the room are welcome to
- 01:37address questions to Bonnie or to
- 01:39have comments and conversation.
- 01:40And as you know, as always,
- 01:42we will wrap up at 6:30.
- 01:45So with no further ado,
- 01:47I want to introduce my friend
- 01:49and our affiliated faculty, Dr.
- 01:50Bonnie Kaplan.
- 01:59And
- 02:05let me say that publicly.
- 02:06Thank you very much, Doctor Mercurio,
- 02:08for inviting me and for that very
- 02:11welcome and generous introduction.
- 02:13Thank you everyone who's here virtually
- 02:15and in person for coming to hear
- 02:18more about artificial intelligence
- 02:20in medicine and some of the ethical,
- 02:22legal and social issues that have come
- 02:24up over the many years we've been
- 02:26using it and are coming up now as well.
- 02:31I have no conflicts of interest to declare,
- 02:36no funding to declare.
- 02:41I'm going to meet the objectives of
- 02:43this talk that you all saw by raising
- 02:46questions and issues to be considered.
- 02:48I'm not going to answer them because
- 02:50there aren't any easy answers here.
- 02:52I'm going to try to separate out
- 02:54the ethical and legal issues.
- 02:56The social issues run through all of those,
- 02:58but all of them are interconnected
- 03:01and I am going to focus on now and
- 03:05before now, instead of the far future,
- 03:07or maybe not so far future, of issues
- 03:11having to do with superintelligence
- 03:13and moral agency of various sorts
- 03:17of potential artificial intelligences.
- 03:20I'm going to focus on clinical uses.
- 03:22There are uses that are not
- 03:24entirely clinical.
- 03:25I'm not going to talk about
- 03:27all of AI in medicine.
- 03:29We would be here for the next several years.
- 03:31I am not even going to talk therefore about
- 03:34all of information technology and medicine,
- 03:36although many of the issues
- 03:38cross cut all of those.
- 03:40So I'm going to survey a lot of different
- 03:44things with the goal of encouraging
- 03:47and hoping that we'll all be thinking
- 03:49much more about these kinds of issues,
- 03:51that we will be thinking much
- 03:53more broadly than most people
- 03:55have been thinking so far.
- 03:56And so that we can make wise decisions
- 03:58about how it should be used in healthcare.
- 04:03A quick review of just where we've
- 04:05come over the past seventy years or
- 04:08so shows that we've had many, many,
- 04:10many ways that artificial intelligence
- 04:13is being used in medicine today.
- 04:15Those of you who are in practice
- 04:17have probably seen some of these,
- 04:19those of you who are patients
- 04:21have seen some of these.
- 04:22So that's pretty much everybody here.
- 04:24Even if you don't know that,
- 04:26it's artificial intelligence
- 04:28that's behind these things, because until
- 04:30very recently we haven't much called it
- 04:33artificial intelligence, except very
- 04:36early on, and what's mainly being called
- 04:40artificial intelligence now are the
- 04:42machine learning, ChatGPT sort of programs.
- 04:45But clinical decision support systems
- 04:46which have been developed over many,
- 04:49many years, are also artificial intelligence.
- 04:52They include algorithmic scoring systems,
- 04:54which almost anybody who's doing clinical
- 04:56work is going to be familiar with.
- 04:58Treatment guidelines,
- 04:59drug-drug interaction checks and alerts,
- 05:02for instance if you are giving a patient a prescription
- 05:05for something they may be allergic to. And reminders:
- 05:08they can be very simple,
- 05:09like don't forget to vaccinate this patient.
- 05:12Or much more sophisticated,
- 05:14they could be having to
- 05:16do with image analysis.
- 05:18Diagnostic image analysis on
- 05:21EKGs, on retinal scans, on mammograms,
- 05:24on colonoscopies,
- 05:24on all kinds of images is going on.
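
To make the simpler, rule-based end of this list concrete, here is a minimal sketch of a drug-allergy alert of the kind just mentioned. The drug names, allergy classes, and mapping are hypothetical illustrations, not any vendor's actual rule base.

```python
# Minimal sketch of a rule-based drug-allergy alert, the simplest kind of
# clinical decision support described above. All drug and allergy names
# are hypothetical, for illustration only.

# Map each drug to the allergy classes it can trigger (illustrative values).
DRUG_ALLERGY_CLASSES = {
    "amoxicillin": {"penicillin"},
    "cephalexin": {"penicillin", "cephalosporin"},
    "ibuprofen": {"nsaid"},
}

def allergy_alerts(prescription, patient_allergies):
    """Return alert messages when a prescribed drug conflicts with a recorded allergy."""
    conflicts = DRUG_ALLERGY_CLASSES.get(prescription, set()) & set(patient_allergies)
    return [
        f"ALERT: {prescription} may conflict with documented {allergy} allergy"
        for allergy in sorted(conflicts)
    ]

if __name__ == "__main__":
    print(allergy_alerts("amoxicillin", {"penicillin"}))  # one alert
    print(allergy_alerts("ibuprofen", {"penicillin"}))    # no alert
```

Real systems layer many such rules, severity levels, and override reasons on top of this, which is where the alert-fatigue problem discussed later in the talk comes from.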
- 05:28We have monitoring and alert systems
- 05:32where you're monitoring patients
- 05:35in the hospital and where you're
- 05:38monitoring patients at home through
- 05:40various kinds of sensors and machinery.
- 05:42We also have devices that do automatic
- 05:45monitoring and regulation of things like
- 05:48insulin in automatic insulin pumps.
- 05:50And this is wonderful.
- 05:51Imagine if you're a diabetic,
- 05:52you don't have to constantly take your
- 05:55sugar levels and make adjustments.
- 05:56We have programs that issue trigger
- 06:01alerts about impending events, such as work
- 06:04that's currently being done,
- 06:06very promising work, on is an epileptic
- 06:09seizure coming, or older work that says you,
- 06:15Bonnie, may be walking into an area
- 06:18that's going to trigger your asthma.
- 06:20So all kinds of these sorts of
- 06:23things are happening and we have
- 06:25health behavior counseling programs
- 06:26that have been going on a long time.
- 06:29They can help the clinicians counsel
- 06:31patients by providing guidelines for
- 06:33that or they can be directly used by
- 06:36patients such as a system I worked on
- 06:39which had telephone based AI counseling
- 06:41for diet and exercise behavior.
- 06:44We have language processing,
- 06:46voice recognition for dictation.
- 06:49It's gotten better and better and better.
- 06:51We've had natural language processing for
- 06:54analyzing the text and patient records.
- 06:56We have automated scribing so
- 07:00that it's sort of listening
- 07:02in on a clinical encounter,
- 07:05notes get generated automatically.
- 07:07We have resource and bed algorithms
- 07:09and predictive algorithms for
- 07:14doing organ allocation for transplant patients,
- 07:20and I lost the mouse, excuse me,
- 07:25we have ICU bed allocation
- 07:28algorithms, and that really
- 07:31came to the fore and was very,
- 07:33very helpful during the
- 07:35peak of the COVID pandemic.
- 07:38When you have scarce resources,
- 07:40how is the wisest way to allocate them?
- 07:42I'm OK, thank you.
- 07:45Yeah, we have the quantified self
- 07:48movement that's been going on now
- 07:50for a while where individuals are
- 07:53taking responsibility for, and a keen
- 07:55interest in, all of their body functions
- 07:58and monitoring their physiological
- 07:59signs and keeping track of that.
- 08:01So they're wonderful developments
- 08:03that are happening that are providing
- 08:06lots of help and lots of guidance and
- 08:09lots of improvements in healthcare.
- 08:19A lot has been going on recently,
- 08:21much more since the introduction of ChatGPT.
- 08:26That caused an awful lot of
- 08:30excitement. Rightly so.
- 08:31It burst on the scene almost
- 08:3512 months ago, not quite.
- 08:37And there's tremendous amount
- 08:39going on here at Yale since then
- 08:43that's now called explicitly AI.
- 08:45They just had a conference on this
- 08:47where a number of the projects were
- 08:50highlighted and I've listed them here.
- 08:52But the main point is there's a lot going on,
- 08:54a lot of programs happening
- 08:56all over on many campuses,
- 08:58including here.
- 09:05How did we get here?
- 09:06So I want to take a look at what we've
- 09:09seen in the past and a number of the
- 09:11kinds of issues that have come up in
- 09:13the past that are still prevalent today
- 09:16in somewhat different incarnations,
- 09:18and then some of the newer issues
- 09:21that we're seeing coming up.
- 09:23AI has been part of medicine for a long time.
- 09:28Alan Turing's paper from 1950 raised a
- 09:32question of are computers intelligent?
- 09:35They were already out there.
- 09:37They were already being touted
- 09:38as giant brains.
- 09:39And this has been a question
- 09:41we've had all along.
- 09:42They're talking here about digital computers.
- 09:44There were other kinds of
- 09:46computers before that.
- 09:47In the 1950s,
- 09:49there was enough happening in
- 09:52medicine that we had books coming out,
- 09:56compiling various key contributions
- 09:58and papers.
- 09:59At the end of the decade we had a conference,
- 10:02a major conference that included work
- 10:05on artificial intelligence such as
- 10:08image analysis and pattern recognition
- 10:10for pathology and for radiology,
- 10:12and Ledley and Lusted's very
- 10:15influential 1959 work on using Bayesian
- 10:19analysis for differential diagnosis.
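
As a rough illustration of the kind of Bayesian updating Ledley and Lusted argued for, the sketch below shows how a prior disease probability is revised by a positive test finding. All the probabilities are invented for the example; they are not taken from their paper or from any clinical source.

```python
# Toy illustration of Bayesian updating for differential diagnosis, in the
# spirit of Ledley and Lusted's 1959 paper. All numbers are invented.

def posterior(prior, sensitivity, specificity):
    """P(disease | positive finding) computed with Bayes' rule."""
    true_positive = sensitivity * prior
    false_positive = (1 - specificity) * (1 - prior)
    return true_positive / (true_positive + false_positive)

if __name__ == "__main__":
    # A condition with a 2% prior and a test with 90% sensitivity and 95%
    # specificity: the positive finding raises the probability to about 27%,
    # far less than intuition often suggests.
    print(round(posterior(prior=0.02, sensitivity=0.90, specificity=0.95), 3))
```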
- 10:25I'm sorry I went backwards.
- 10:26Let's go forwards. OK?
- 10:28Of course I've arbitrarily
- 10:30divided this into decades
- 10:34because the work just continues
- 10:36all along and takes different
- 10:39forms at different periods.
- 10:40So by the 1960s we already had built
- 10:44in to various sorts of record systems
- 10:48what was called computer diagnosis
- 10:50and then computer assisted diagnosis.
- 10:55Larry Weed's POMR, the problem-
- 10:57oriented medical record,
- 10:59which he developed, he then made into
- 11:03a computer-based system called PROMIS,
- 11:05which included clinical guidelines.
- 11:08The work done at what's now
- 11:10Intermountain Healthcare by Homer
- 11:12Warner also included that sort of work.
- 11:16Very influential projects going on at
- 11:18Stanford in what was called the SUMEX-
- 11:20AIM project. We had work also
- 11:22going on outside of medicine
- 11:24in artificial intelligence.
- 11:26Just as one example that got brought
- 11:29into medicine is the work that was
- 11:32done by Joseph Weizenbaum at MIT as
- 11:35an AI project to simulate Rogerian
- 11:38psychotherapy and that became promoted
- 11:40for ways to treat psychiatric patients.
- 11:43That's a program you may know as ELIZA,
- 11:46or you may know as DOCTOR,
- 11:48and many people have used that program
- 11:51knowing that it's computer based.
- 11:54Now they're using chatbots. In the 1970s,
- 11:59we have somewhat of a change in the
- 12:02way the technology is done and the
- 12:05development that of course started
- 12:07in the 60s called expert systems.
- 12:10These are a little bit different because
- 12:12they are in more limited domains.
- 12:14So the very well known MYCIN system that
- 12:18Ted Shortliffe developed at Stanford,
- 12:211972 is when it first hit the scene,
- 12:25recommended antibiotics for viral infections,
- 12:28bacterial infections, I'm sorry.
- 12:32Here at Yale,
- 12:33some of you may have known Doctor Miller,
- 12:35Perry Miller.
- 12:36Perry developed the PUFF program
- 12:39for anesthesiology administration.
- 12:41Another expert system.
- 12:42Work in these systems continued
- 12:44for some while.
- 12:46Perry went on to found the Yale
- 12:48Center for Medical Informatics,
- 12:49which I was part of for many, many years.
- 12:52It's now been rolled into the new
- 12:54programs here at Yale under the
- 12:56direction of the new position, the
- 12:59new deanship, of Doctor Lucila Ohno-
- 13:01Machado in Biomedical Informatics.
- 13:04So we see legacies here,
- 13:07even if you only look locally,
- 13:09but also internationally.
- 13:10And I'm only focusing on the US here,
- 13:12but this is going on all over many,
- 13:15many places.
- 13:16We also have by the end of this decade,
- 13:20the formation of a number of
- 13:21different journals and a number of
- 13:24different conferences in this field.
- 13:26A number of those merged later into
- 13:28one big gigantic conference.
- 13:31So we'll be seeing, by the time
- 13:34we finally get to the end of the 80s,
- 13:37lots of developments that are now
- 13:39the way we're thinking about the
- 13:41use of information technology and
- 13:43artificial intelligence in medicine.
- 13:48Note the change in terminology
- 13:50because now they're being called
- 13:52clinical decision support systems.
- 13:54Those changes in terminology
- 13:56have ethical implications that
- 13:58you may want to think about.
- 14:02Scoring has gotten automated,
- 14:03it used to not be automated.
- 14:06We have work from that
- 14:07published in the early 80s.
- 14:09We have counseling systems and so on.
- 14:12I've talked some about those already.
- 14:15They come out of this period,
- 14:18and we are starting now,
- 14:21in this period, to get more
- 14:23attention paid to ethical issues,
- 14:25explicitly called out as ethical issues.
- 14:28They've been there all along.
- 14:30We've known about them all along.
- 14:31We've talked about them all along,
- 14:33but now they're getting
- 14:35labeled ethical issues.
- 14:36So we have new books coming out.
- 14:40We have new papers coming out.
- 14:42Ken Goodman,
- 14:43who was here at the beginning of this year,
- 14:45put together the first book on ethical
- 14:50issues in computing in medicine,
- 14:54including on ethical issues having
- 14:57to do with clinical decision support
- 14:59and algorithmic scoring and so on.
- 15:01Ken went on to found the Ethical,
- 15:03Legal and Social Issues Working
- 15:05Group within the American
- 15:07Medical Informatics Association.
- 15:08AMIA resulted from the merging
- 15:10of all those organizations.
- 15:11I told you before,
- 15:12it's now one of the premier conferences,
- 15:15scientific conferences in the field.
- 15:17Thousands of people come and the
- 15:18ELSI Working Group is a respected
- 15:20part of that organization,
- 15:22and ethical,
- 15:23legal and social issues are part of
- 15:25the discussion in medical informatics.
- 15:27So what are some of those concerns
- 15:29that have been there all along and
- 15:32how are they getting amplified now?
- 15:34And what new concerns do we
- 15:36need to think about?
- 15:37Well, just as a review,
- 15:40we have again many,
- 15:41many wonderful things going on.
- 15:43Clinical decision support systems have
- 15:45been part of the medical records,
- 15:47your electronic health record
- 15:48systems for some while; they were
- 15:51mandated to be included in there
- 15:53by the HITECH Act of 2009.
- 15:55So they're there.
- 15:59And AI has had many meanings over the
- 16:03years then and many incarnations,
- 16:05some of it being used by physicians,
- 16:08some of it being used by other
- 16:10people clinically,
- 16:10some of it being used by patients,
- 16:13some of it commercially available,
- 16:15some of it only inpatient,
- 16:17some of it out there
- 16:19in social media. We're surrounded by it.
- 16:23And all along,
- 16:25we've had visions that say it's
- 16:28going to make medicine more humane.
- 16:31It's going to make it more scientific.
- 16:33It's going to free physicians so
- 16:34that they can spend more time with
- 16:37patients and provide better care.
- 16:38It's going to reduce costs.
- 16:40It's going to improve quality.
- 16:42And that will lead to much better care,
- 16:44much better outcomes, much better lives,
- 16:46and a great new utopian world of medicine.
- 16:49And we have had some of that.
- 16:52And then we have dystopian views.
- 16:54The other side of that,
- 16:55it's going to replace the doctor.
- 16:57It's going to make medicine
- 16:59inhumane and mechanical.
- 17:00It's going to lose the art and personal
- 17:02touch that are so vital to good health care.
- 17:05Some of that has happened.
- 17:07So we're getting some of both.
- 17:09And we're not going to have
- 17:11either of these two extremes,
- 17:12I think.
- 17:15But we do need to pay attention to what
- 17:17kinds of issues come up from both of these
- 17:20visions and all the visions in between.
- 17:28Hello. OK, so some of what's going on
- 17:33since the get-go is that in the 1950s, when you
- 17:37already had books coming out in this field,
- 17:40you also had a lot of concern coming out
- 17:44saying how come we're not using computers
- 17:46the way we should be in medicine?
- 17:48How come other domains are using
- 17:50them much more than we are?
- 17:51Why is there a lag in medical computing?
- 17:54This has been a concern all along.
- 17:57It's been addressed in many different ways.
- 18:00And today, the kinds of things
- 18:03were being talked about would be
- 18:05called the need for explainability,
- 18:07the need for trustworthiness,
- 18:09issues having to do with how these
- 18:11systems are implemented and fit in with
- 18:14clinical workflow,
- 18:16and how they fit in with what people
- 18:19think their jobs are clinically and
- 18:22the values that come along with those
- 18:24conceptions of what your work is and
- 18:26what it should be.
- 18:27So let me talk a little bit about
- 18:30clinical practice changes here.
- 18:32This lag issue has been around,
- 18:34as I said, a long time.
- 18:35I did my doctoral dissertation looking at it,
- 18:39it's still there, only maybe not
- 18:42talked about quite the same way.
- 18:45For some people,
- 18:48clinical guidelines feel mechanical,
- 18:50restricting and challenges to their
- 18:53professional knowledge and authority.
- 18:55For others, they're wonderful.
- 18:56They help you give better care.
- 18:58They remind you of things you
- 18:59might have not thought about.
- 19:00They tell you things that you maybe
- 19:02didn't know. So you've had, all along,
- 19:05that kind of balancing between how
- 19:09much is science, how much is art,
- 19:12how much do we need,
- 19:13how much is it dehumanizing,
- 19:16how much does it really help us
- 19:18provide better care?
- 19:20The same system has different reactions
- 19:24provoked in different people.
- 19:27Some of them relate to it one way,
- 19:30some of them relate to it another way,
- 19:32and a lot of that,
- 19:33again,
- 19:33depends on how they see their jobs and
- 19:36how they put values into what they're doing.
- 19:40This cuts across all sorts of clinical work.
- 19:43I saw this in a study I did of
- 19:46laboratory technologists in the
- 19:481980s when we were putting in a
- 19:50laboratory information system.
- 19:54A different issue having to do with
- 19:57clinical practice: alert fatigue.
- 19:58I'm sure those of you who are
- 19:59doing clinical practice are well
- 20:01aware of what I'm talking about.
- 20:02Too many alerts. You start to
- 20:04feel like the boy who cried wolf,
- 20:07and you don't pay attention to any of them,
- 20:09including the real important ones,
- 20:11and they're distracting and annoying.
- 20:13And who needs them?
- 20:14So alert fatigue is an issue having
- 20:17to do with how do we make sure that
- 20:20we don't have alerts that are
- 20:22not that helpful,
- 20:23and we do have alerts that are, so
- 20:25that they really do improve practice.
- 20:28We have workflow issues.
- 20:30How you treat people who do the work,
- 20:33no matter who they are in the clinical
- 20:37hierarchy makes a difference ethically
- 20:40because you want to treat people well.
- 20:43And it certainly makes a difference
- 20:45in the kind of care that's delivered
- 20:47because people who are not feeling well
- 20:49treated are not going to do their best.
- 20:54So there are concerns here, and there are
- 20:57concerns as well about changing
- 20:59roles and relationships because of
- 21:01all of this. We now have had
- 21:04promoted, for maybe 25 years or so,
- 21:07the clinician-patient-computer partnership.
- 21:11Well, what happens when you introduce
- 21:13the computer into these partnerships?
- 21:16How does that change the relationships
- 21:19between clinicians and clinicians,
- 21:21clinicians and patients?
- 21:23What happens when a patient goes in and
- 21:27only gets to see the back of your
- 21:29head when you're typing in notes?
- 21:30What happens when instead of
- 21:35having radiology conferences,
- 21:37the person who ordered the radiology
- 21:40film is only getting readings
- 21:42coming at them through the computer
- 21:44and you lose the back and forth
- 21:46that happens with a radiology conference?
- 21:49How did that patient feel on palpation?
- 21:51What were the lab results?
- 21:53What do you think it is?
- 21:54Well, my gut reaction is it's that,
- 21:57but I can't say that because
- 21:59it's not clinically diagnostic.
- 22:00If all the communication is being
- 22:03done through the record or a lot
- 22:06of the communication is being
- 22:07done through the record,
- 22:08you lose those interactions,
- 22:10you lose those valuable encounters
- 22:12and you end up treating the
- 22:14data instead of the patient.
- 22:17And sometimes that's a
- 22:18very helpful thing to see,
- 22:20and sometimes it's really detrimental.
- 22:25You have the same kinds of
- 22:26problems with the scoring systems.
- 22:28You want to pay attention
- 22:30not just to the outputs,
- 22:32but to how they relate to this particular
- 22:35patient in these particular circumstances.
- 22:38Your knowledge clinically,
- 22:39if you are a clinician,
- 22:41your values, if you're a patient,
- 22:43should all be incorporated into the kinds
- 22:46of decisions and care that's provided.
- 22:49Again, patients see different
- 22:51systems differently too. I'm sorry.
- 22:53Same systems differently too.
- 22:55So some may consider getting reminders,
- 22:58take your medicine, as: well,
- 23:01that's really helpful,
- 23:02I would have forgotten otherwise.
- 23:03Or: boy, I don't want to kindly
- 23:05be reminded that I'm sick.
- 23:07I wish it would leave me alone.
- 23:09And we've seen those kinds of
- 23:11results on studies over the years.
- 23:17All of this development,
- 23:19all of what's going on in machine
- 23:22learning and big data come from
- 23:25tremendous amounts of data collection.
- 23:27So I want to turn now to some of
- 23:29the data issues that have come up.
- 23:31We've had widespread data
- 23:33collection for over 20 years now.
- 23:35Concern about data collection goes back to
- 23:37at least the 1960s and large databases,
- 23:40with important legal work being
- 23:42done that eventually motivated
- 23:44the development of HIPAA.
- 23:46Businesses, healthcare,
- 23:48government, researchers:
- 23:49Everybody's collecting data,
- 23:51they are aggregating this data,
- 23:53they are selling this data,
- 23:54they're putting it together in all
- 23:56kinds of interesting ways and doing
- 23:58all sorts of analysis on this data.
- 23:59And these analyses are proving to be
- 24:02very valuable and helpful in a number
- 24:04of ways and they're also proving to
- 24:07be problematic in a number of ways.
- 24:09So this is the reason why we
- 24:11have the cost of the value.
- 24:12You have various government programs
- 24:15collecting data: NIH,
- 24:17Big Data to Knowledge, or All
- 24:21of Us, collecting data.
- 24:23We have the National Health
- 24:24Service doing this.
- 24:25As I said,
- 24:26I'm focusing on the United States,
- 24:27but this is going on all over.
- 24:29All of the things I'm talking
- 24:31about are going on all over.
- 24:32And we have data being collected
- 24:35by social media companies.
- 24:37We have data being extracted
- 24:40from your mobile phones.
- 24:42We have data everywhere and all
- 24:45of this is getting put together
- 24:49and used by partnerships between
- 24:52health tech companies and healthcare
- 24:54organizations or companies that
- 24:56want to get into healthcare.
- 24:58And some of it is giving us really
- 25:01interesting important results,
- 25:02such as being able to identify
- 25:04various biomarkers that we might
- 25:06not have known about before.
- 25:07And that's very valuable.
- 25:14Some of it is raising other problems.
- 25:17When we have real world data,
- 25:20as this is now called,
- 25:22you can more easily look
- 25:26at social determinants of health.
- 25:27You can figure out connections that
- 25:29are really important for trying to
- 25:32address whatever this patient's issues
- 25:35and problems and diseases may be.
- 25:38But it also makes all data health data.
- 25:42And that raises the question of
- 25:45do we always want to not have
- 25:47a division between
- 25:49illness and health?
- 25:50How much do you want to think in terms
- 25:52of sickness along a big spectrum?
- 25:55Here, the data, of course,
- 25:59is decontextualized.
- 26:01We don't know under what circumstances
- 26:02it's all been collected.
- 26:04We don't know what else is going on in
- 26:05that person's life that may be important.
- 26:07We don't know why they did whatever
- 26:08it is that's in the data,
- 26:10we just don't know an awful lot of
- 26:12things that could be very relevant
- 26:14for trying to understand that data.
- 26:17This leads to all sorts of issues
- 26:19having to do with data quality.
- 26:21Even assuming that we knew all that,
- 26:24which we don't,
- 26:26we do know that there is underreporting:
- 26:29people who are concerned about stigma
- 26:32are going to be more careful about
- 26:34data and so will clinical people
- 26:36be more careful about that data.
- 26:39We have upcoding and downcoding
- 26:41for billing purposes.
- 26:42We have
- 26:45people who are not included in the data.
- 26:47Not everybody goes for healthcare.
- 26:49Not everybody has a mobile device.
- 26:51Not everybody's using Facebook,
- 26:54thank goodness.
- 26:55So we certainly don't have
- 26:58everybody in the data.
- 27:00We have data that is affected
- 27:03by what's called workarounds.
- 27:05This is where you encounter, in
- 27:08whatever computer system you're using,
- 27:11something that won't let you get
- 27:12past a certain point unless you put
- 27:14in certain kinds of information.
- 27:16So for example, in another study that I did,
- 27:18I watched as physicians sat there
- 27:21and looked and said, you know,
- 27:24it's telling me that my diagnosis
- 27:26doesn't match the symptoms.
- 27:28Well, I know this patient has this diagnosis,
- 27:32but there's only four of these
- 27:33symptoms that the patient has
- 27:35instead of all five of them.
- 27:36So I'll just put in the fifth one
- 27:38because I can't get past this point.
- 27:40OK, That's a workaround.
- 27:42It says something about
- 27:43how you design systems.
- 27:45It says something about
- 27:47the quality of the data.
- 27:49It says something about the ethics
- 27:50of that design and the ethics
- 27:52of the physician's behavior.
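
A minimal sketch of the kind of hard-stop rule that invites the workaround just described is below. The diagnosis, the five required symptoms, and the all-or-nothing rule are all hypothetical, chosen only to show how a rigid form check pressures a clinician to enter data that is not true.

```python
# Toy hard-stop rule of the kind described above: the form refuses to save a
# diagnosis unless every expected symptom is documented. Names are invented.

REQUIRED_SYMPTOMS = {
    "hypothetical-syndrome-x": {"fever", "cough", "rash", "fatigue", "headache"},
}

def can_save(diagnosis, documented_symptoms):
    """Hard stop: the record saves only if all expected symptoms are present."""
    required = REQUIRED_SYMPTOMS.get(diagnosis, set())
    return required <= set(documented_symptoms)

if __name__ == "__main__":
    # The patient genuinely has four of the five symptoms, so the form blocks
    # saving, which tempts the clinician to check the fifth box just to get past.
    print(can_save("hypothetical-syndrome-x", {"fever", "cough", "rash", "fatigue"}))  # False
    print(can_save("hypothetical-syndrome-x",
                   {"fever", "cough", "rash", "fatigue", "headache"}))                  # True
```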
- 27:54So a number of issues here,
- 27:56OK,
- 27:57we have billing codes being used
- 27:59for analysis for clinical purposes
- 28:02instead of diagnostic codes.
- 28:05We have errors.
- 28:06Of course everybody makes mistakes,
- 28:08so you'd expect there to be mistakes
- 28:11in the data too. When you train systems,
- 28:14and this is happening in machine
- 28:16learning systems,
- 28:18on all that data,
- 28:19you may be amplifying some of these problems,
- 28:22leading to inaccuracies,
- 28:23leading to biases and so on,
- 28:25because you keep reusing the results
- 28:27over and over and over again.
- 28:29And that makes the problems
- 28:31even worse than they were.
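
A hedged toy model of the reuse problem described above: when a model's own outputs are fed back in as training labels, a small initial error compounds. The rates, the miss fraction, and the update rule below are invented purely to show the feedback effect, not to describe any real system.

```python
# Toy feedback loop: each new training set is labeled by the previous model,
# which misses a fraction of the positives it should find, so the learned
# prevalence drifts further from the truth each generation. Numbers invented.

true_rate = 0.30       # actual prevalence of a finding in the population
labeled_rate = 0.27    # the first training set already under-reports it
miss_fraction = 0.10   # each model misses 10% of the positives it learned about

for generation in range(1, 6):
    labeled_rate *= (1 - miss_fraction)   # the model's labels feed the next round
    print(f"generation {generation}: labeled prevalence {labeled_rate:.3f}"
          f" (true rate stays {true_rate})")
```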
- 28:38So having tech companies do this,
- 28:40having researchers do this,
- 28:42is it a treasure trove for research?
- 28:45It is. Is it a privacy nightmare?
- 28:48It is. And that again gets into
- 28:52how do you judge which of these
- 28:55is a problem and which of these is
- 28:57helpful and where you want to go here?
- 29:00Data is readily available.
- 29:03Anyone can see it.
- 29:05It's valuable on the black market.
- 29:07It raises all sorts of issues
- 29:10of consent and control.
- 29:14Ownership and who controls data is not clear.
- 29:20Data is sold and resold and sold
- 29:23again and aggregated and sold again.
- 29:25And even when it's collected,
- 29:27the ownership isn't clear,
- 29:28and the
- 29:30control certainly isn't clear.
- 29:32By the time you get down this chain,
- 29:34who knows? OK, how widely should
- 29:38we be collecting all this data?
- 29:40What should we do about consenting?
- 29:44How much should patients know about all this?
- 29:46How much can they know?
- 29:48Because who knows what's
- 29:49happening with that down the pike,
- 29:51or what happens tomorrow when it gets sold,
- 29:53or the company goes out of business
- 29:55and the data's out there or whatever.
- 29:57So how do we deal with consenting issues?
- 30:00How do we deal with even consenting?
- 30:03Whether this data should be
- 30:05used for this purpose?
- 30:06To develop an algorithm?
- 30:07Maybe I find that current purpose recognized.
- 30:10Maybe I want to contribute
- 30:12to science in this way.
- 30:14Nobody's asking me as a patient. Is
- 30:17it right to just assume that when I
- 30:19sign blanket consent for treatment,
- 30:21it's OK to say I should give
- 30:23blanket consent for whatever use
- 30:25of my data anyone wants to make?
- 30:27The same is happening with the non
- 30:29clinical data as well, and I'll come
- 30:32back to that under the legal part.
- 30:36We know from a study I'm doing now,
- 30:39that we have a paper in review for,
- 30:42that data is widely, quote, shared
- 30:45by healthcare institutions, that
- 30:47they vary in what their policies
- 30:51and procedures are for doing this,
- 30:54that those policies and procedures may
- 30:56not even be known by the people who are
- 30:59responsible for making the decisions.
- 31:00We found that in our interviews
- 31:02they were all very concerned about
- 31:04privacy and protecting patients,
- 31:06but they may not know how their
- 31:08institution is actually doing that
- 31:10and certainly patients don't know.
- 31:13So we have reason to think more
- 31:16about these kinds of policies and
- 31:19procedures and what they should
- 31:21be and what our institutional
- 31:22responsibilities are in that regard.
- 31:26HIPAA: re-identification is
- 31:28getting easier and easier
- 31:30as data is getting combined.
- 31:33De-identified data doesn't
- 31:35require consent to share.
- 31:38So you've got issues there about
- 31:40what happens when it gets re-
- 31:41identified and how easy is it
- 31:43going to be to re-identify.
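
As a hedged sketch of why combining datasets makes re-identification easier: a record stripped of names can often be matched back to a person by joining on a few quasi-identifiers such as ZIP code, birth date, and sex. Every record below is fabricated for illustration.

```python
# Toy linkage example: joining "de-identified" visits to a public directory
# on quasi-identifiers. All records are fabricated.

deidentified_visits = [
    {"zip": "06510", "birth_date": "1971-03-02", "sex": "F", "diagnosis": "asthma"},
    {"zip": "06511", "birth_date": "1985-11-19", "sex": "M", "diagnosis": "diabetes"},
]

public_directory = [
    {"name": "Jane Example", "zip": "06510", "birth_date": "1971-03-02", "sex": "F"},
    {"name": "John Sample",  "zip": "06511", "birth_date": "1985-11-19", "sex": "M"},
]

def link(visits, directory):
    """Match each visit to a directory entry sharing the same quasi-identifiers."""
    keys = ("zip", "birth_date", "sex")
    index = {tuple(p[k] for k in keys): p["name"] for p in directory}
    return [
        {"name": index.get(tuple(v[k] for k in keys)), "diagnosis": v["diagnosis"]}
        for v in visits
    ]

if __name__ == "__main__":
    for match in link(deidentified_visits, public_directory):
        print(match)   # the supposedly anonymous diagnoses now carry names
```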
- 31:48What about data breaches?
- 31:50Of course, you know,
- 31:51we certainly need attention to cybersecurity.
- 31:54Everybody agrees HIPAA needs updating, or
- 31:57cybersecurity measures need to be there.
- 32:00We've known this for many years.
- 32:02It's been debated for many years and
- 32:04there isn't agreement on what to do,
- 32:06so it just stays the way it is.
- 32:09And that's problematic. Then there's data
- 32:14that's not covered by HIPAA:
- 32:15data on your mobile apps, for instance,
- 32:18data on your social media.
- 32:22That data is governed by the end user
- 32:25agreements that you click through.
- 32:27Who reads an end user agreement?
- 32:30How many of you have ever
- 32:31read an end user agreement?
- 32:33A few of you.
- 32:34OK. And if you read it,
- 32:36you'd find it's incredibly long,
- 32:38like Hamlet. The privacy policy,
- 32:41if it's there at all,
- 32:42is buried in some place, who knows where.
- 32:45It is extremely expansive,
- 32:47and it's often not even followed.
- 32:50So that becomes a problem too.
- 32:52And if you want to use that app,
- 32:55you click through it,
- 32:57right?
- 32:58The Federal Trade Commission
- 33:00regulates how well companies comply
- 33:04with those end user agreements,
- 33:09and you have to bring a case for
- 33:12them to even know that there may
- 33:13be an issue which you cannot do
- 33:15if there's a HIPAA violation.
- 33:16Incidentally,
- 33:17individuals cannot sue if
- 33:20there's a HIPAA violation.
- 33:23We're developing more and more ways of
- 33:25protecting some of this data technologically.
- 33:28It also has problems for
- 33:30data quality because,
- 33:32for instance,
- 33:33data segmentation to keep the stigmatizing
- 33:35data out of the sharing may remove
- 33:38really important clinical information
- 33:40that's needed for that patient.
- 33:42So again, you're protecting privacy,
- 33:45but you're possibly impairing care.
- 33:49Some data has to be identified.
- 33:51If you want to do clinical studies on it,
- 33:53you need to know who the patients are.
- 33:55So there are all sorts of
- 33:58fraught privacy issues here.
- 34:00There are issues having to do with
- 34:03not only harming the individual,
- 34:05but also harming anybody
- 34:07connected with that individual.
- 34:09Whatever people are included in the group
- 34:11that person is classified into
- 34:13by these various algorithms.
- 34:15Whatever people are in the home when
- 34:17you have sensors collecting data.
- 34:18So there are important privacy issues,
- 34:21not just for the patient involved,
- 34:23but also for many other people
- 34:26connected with the patient.
- 34:27What about consent for them?
- 34:30Whoever asks them?
- 34:33OK,
- 34:34so another consenting issue has to
- 34:36do with the fact that, as we're getting more and
- 34:39more concerned about how these data
- 34:42are collected and used, and about privacy,
- 34:44people are losing confidence.
- 34:46Again,
- 34:47we've seen studies on this over at
- 34:50least the past 20 years that you
- 34:53don't have clinical confidentiality.
- 34:55How much can you trust that your
- 34:57information is going to be protected?
- 35:00That affects what you tell a
- 35:03doctor if you're a patient,
- 35:04it affects what you might put in
- 35:06the record if you're the doctor.
- 35:08And that affects certainly the
- 35:10care that's given if you don't
- 35:12have good information.
- 35:13And it affects the quality of
- 35:16the data on which subsequent
- 35:17analysis and research and
- 35:20algorithms are developed.
- 35:21So again, lots of potential
- 35:24interactions of issues here.
- 35:33These issues cut across all of AI,
- 35:36not just in healthcare.
- 35:38I'm using healthcare as examples,
- 35:40and I want to call out now
- 35:41some of the kinds of issues,
- 35:45some of which I just did and
- 35:54some of which come up in other applications.
- 36:02OK, automation bias comes
- 36:04up all over the place.
- 36:06If the computer says it,
- 36:08it must be true. Of course,
- 36:10you've got bias in the other direction also,
- 36:13so you don't want either of those.
- 36:16What kinds of values are embedded
- 36:18in the models we're seeing?
- 36:20Do you want to prioritize
- 36:22efficiency in healthcare?
- 36:23Do you want to prioritize time
- 36:26personally spent with patients?
- 36:27Do you want to prioritize resources?
- 36:29There's a lot of possible
- 36:31values that could go into this,
- 36:33and which ones should go there and
- 36:37which ones are being decided on and
- 36:40how are those decisions being made?
- 36:42You have ethical issues having to do with
- 36:45what happens when you commodify data.
- 36:48You're commodifying information
- 36:49about me and you and you and you.
- 36:53You are reducing a person to the data.
- 36:57This is rather reductionist.
- 36:59It's objectifying the data at
- 37:02the expense of the real person.
- 37:05And that has implications for patient
- 37:09care because you end up at times
- 37:13treating your patients not as patients,
- 37:16not as people,
- 37:17but the data representations of them.
- 37:19Even if you're doing telemedicine,
- 37:22you're getting some sort of curated
- 37:24transmission of certain kinds of things
- 37:27and not others about the patient.
- 37:29Although telemedicine again has been
- 37:31a great boon to people who otherwise
- 37:34would not easily be able to get care,
- 37:39we have issues having to do
- 37:42with self fulfilling prophecies.
- 37:44Oh, she'll never walk again?
- 37:46All right, I give up.
- 37:48Or no, I'm not giving up.
- 37:50I'm really going to make sure
- 37:51that I walk again
- 37:57all kinds of possible self
- 37:59fulfilling prophecies.
- 38:01My life expectancy is coming
- 38:03out really low according to all
- 38:06these predictive algorithms. OK,
- 38:11why should I pay any attention
- 38:13then to all the clinical
- 38:15suggestions for taking care of me?
- 38:17What's the point? I may as well
- 38:19go out and enjoy what I have or,
- 38:21oh, I'm going to die anyway.
- 38:24All right, let me die.
- 38:26So how much do these affect people?
- 38:28And they're going to affect
- 38:30different people in different ways,
- 38:31and they're going to affect clinical
- 38:33people in different ways as well.
- 38:35What decisions you as a clinician make
- 38:38will be influenced by these predictions.
- 38:43What about what the patient wants here?
- 38:46How do those values come
- 38:48in to those decisions?
- 38:50They're not built into the predictions.
- 38:52You need to take account of them somehow.
- 38:56Some patients will prefer this,
- 38:59some patients will prefer that.
- 39:03Those decisions need to be made together.
- 39:09It's certainly changing relationships.
- 39:11That's one of the ways what I just described.
- 39:15But there are other relationships too.
- 39:17Patients have access to far more
- 39:20information than they had before.
- 39:22We've reduced the power asymmetries by
- 39:24having all that information available.
- 39:26Doctors have access to far more
- 39:30information than they had before,
- 39:31even if they're off in the middle of nowhere.
- 39:34So we've reduced those power asymmetries.
- 39:37We have much better access to knowledge.
- 39:40We have much more empowerment of
- 39:43all the parties involved for this,
- 39:46but it also puts a burden on us to have
- 39:48to know more about the technologies
- 39:49and how they work and how to assess
- 39:52all that information and whether
- 39:53it's true and whether it's valuable
- 39:55and what it means for me and so on.
- 40:00We have, as I mentioned,
- 40:01effects on the work and labor force,
- 40:04upskilling, downskilling,
- 40:07guidance that's being given for people
- 40:09who do not have the kind of expertise
- 40:12that used to be required for giving
- 40:14clinical care and giving guidance. We have
- 40:20systems that can see what's invisible in
- 40:23images to the betterment of diagnosis.
- 40:25So that we're doing some
- 40:27remarkable things in diagnosis.
- 40:29And we're also finding that the
- 40:33systems can see things in images
- 40:35that you can't see that are totally
- 40:38irrelevant and skewing diagnosis,
- 40:40such as artifacts that came from
- 40:43that particular institution,
- 40:44as has happened and got discovered
- 40:46when you try to move the system
- 40:47somewhere else and all of a
- 40:49sudden it's not accurate anymore.
- 40:50We need to be paying attention to
- 40:53those kinds of issues that may
- 40:54come up when we use these systems.
- 40:57That then says that we have many,
- 41:02many suggestions that have been
- 41:05promulgated all across the world
- 41:07for how to improve AI,
- 41:09how to make sure the AI is doing
- 41:11what it needs to do in ways
- 41:13that are ethical and important.
- 41:16We will get to those in just a moment.
- 41:19The other issue I want to raise
- 41:22first is the resource issue.
- 41:24It's another ethical issue.
- 41:25It's not peculiar to medicine,
- 41:27but it's certainly here in medicine.
- 41:30The AI systems are extremely
- 41:33legally intensive. I'm sorry,
- 41:36they're extremely resource intensive.
- 41:38They're legally intensive too.
- 41:39And I'll get to that shortly.
- 41:41OK, so they require a lot of power.
- 41:45They require a lot of minerals
- 41:47to manufacture the components.
- 41:49They require a lot of cheap labor to
- 41:51label things that are going to go
- 41:53into the data that are being analyzed.
- 41:55A lot of that labor may
- 41:56be from poor countries.
- 41:59They require tremendous computing facilities.
- 42:03They require a huge expenditure
- 42:05that only some companies,
- 42:07some organizations,
- 42:09some governments can pay for.
- 42:12Is this the best use of those resources?
- 42:17Maybe some of those government programs
- 42:19would be better spent on improving water
- 42:22or food or medication prices or all
- 42:26kinds of things instead of putting it
- 42:29into the new technologies. Maybe not.
- 42:31Maybe it's going to vary by where you are.
- 42:34So important issues to be considered here.
- 42:43So let me get to some of the
- 42:45AI legal issues I mentioned.
- 42:50First of all, all of the promulgation
- 42:53of various sorts of guidelines
- 42:55for ethical systems called
- 42:57Trustworthy AI, Responsible AI,
- 43:02Ethical AI.
- 43:03Many organizations have done this,
- 43:06including the World Medical Association,
- 43:08the World Health Organization,
- 43:10the AMA, the federal government,
- 43:12the European Union, all kinds of places.
- 43:15So a lot of overlap in
- 43:17these various principles.
- 43:19The principles are extremely
- 43:21hard to operationalize.
- 43:22Let me give you an example.
- 43:25Fairness, transparency, and so on.
- 43:27I'll talk more about transparency
- 43:29and the legal issues.
- 43:30Fairness, what's fair?
- 43:33The organ allocation algorithm
- 43:36was updated not that long ago
- 43:40because of concerns about fairness.
- 43:42How should organs for
- 43:44transplant be allocated?
- 43:45What's fair?
- 43:46Do you do this to ensure equitable
- 43:51geographic distribution?
- 43:52Do you do it based on the age
- 43:56of the potential recipient?
- 43:58Do you do it based on the severity
- 44:00of illness as predicted by some
- 44:03algorithm of the potential recipient?
- 44:05Do you do it according to how many
- 44:08dependents the recipient may have?
- 44:10Do you do it according to how long the
- 44:14person's been on the waiting list?
- 44:16All kinds of issues can come
- 44:18up here as to what's there,
- 44:20and there's not universal agreement on that.
- 44:22So who should make these decisions and
- 44:24how should these decisions get made?
- 44:29There's some very interesting
- 44:30study that's been done of how that
- 44:32was done with organ allocation,
- 44:34But these kinds of issues come
- 44:35up over and over and over
- 44:37again about almost anything.
- 44:42Some of the other kinds of
- 44:46guidelines include transparency.
- 44:47That's going to get me to legal issues in AI,
- 44:51privacy I've already talked about.
- 44:54But I want to add one other thing here.
- 44:56There is no omnibus privacy
- 44:58regulation in the United States.
- 45:00It's regulated by sector,
- 45:02and in healthcare that sector is
- 45:05regulated by HIPAA for clinical care,
- 45:08and it's regulated by the Common
- 45:11Rule for Clinical Research,
- 45:12Human Subject Clinical Research. OK.
- 45:17Data is being used in many ways.
- 45:19Data included from patient
- 45:22records for credit scoring,
- 45:24for insurance decisions,
- 45:27for employment decisions,
- 45:29for bail decisions,
- 45:30for all kinds of things.
- 45:32You may want to think about
- 45:34that when you're thinking
- 45:36about data and privacy,
- 45:38Should it be used those ways?
- 45:42What about how good the systems are?
- 45:45Well, the FDA regulates medications
- 45:50and it regulates what's called medical
- 45:53devices for safety and efficacy.
- 45:56Electronic health record systems are not
- 45:59medications and they're not medical devices.
- 46:01They're not regulated. Software as
- 46:04a medical device has a whole bunch
- 46:06of criteria it has to meet in order
- 46:09to be considered a medical device.
- 46:11And then there's lots of different
- 46:13kinds of medical device classifications
- 46:14with different levels of how
- 46:16much they're regulated.
- 46:19One of the things that means is that
- 46:21they're not vetted by the FDA. OK.
- 46:25Certainly if they're commercial,
- 46:27they're not medical devices.
- 46:29I mean, commercially available to all of us.
- 46:31I don't mean commercially
- 46:33available to hospitals.
- 46:35And those aren't vetted either.
- 46:38The Federal Trade Commission again,
- 46:40makes sure that the devices do
- 46:42what they're supposed to do,
- 46:43the apps do what they claim to do and so on.
- 46:45But who's watching?
- 46:48OK, so again,
- 46:49you have issues having to do with
- 46:51accuracy about these sorts of things.
- 46:57Who's liable then if things don't work,
- 47:00or if they are problematic?
- 47:03The developer? The vendor?
- 47:06The purchaser? The clinician?
- 47:09The people who install it?
- 47:11The people who use it?
- 47:15This is legally rather unclear.
- 47:17What isn't unclear, though,
- 47:19is that the contracts that
- 47:21are written, for instance for
- 47:23electronic health record systems
- 47:27disavow liability.
- 47:28We've known this for over 20 years.
- 47:32It's not easy to find out because the
- 47:35contracts are not publicly available.
- 47:37But some of the contracts we've seen,
- 47:40we know, have clauses in them
- 47:43that say you're a clinician,
- 47:45you're a pharmacist,
- 47:46you're a radiologist,
- 47:48you're a General practitioner,
- 47:50You're supposed to know you
- 47:52are a learned intermediary.
- 47:53You are an expert in this area.
- 47:55And because you are a quote
- 47:57learned intermediary,
- 47:58you are responsible for
- 47:59any decisions you make,
- 48:01even if you are making those
- 48:03decisions based on wrong information
- 48:04coming out of our systems.
- 48:06OK, So you are responsible.
- 48:11That means you have to be paying attention.
- 48:13You have to be there.
- 48:15I think it's one of the reasons
- 48:17you have the human in the loop
- 48:19criteria for ethical systems.
- 48:20But you can't always have
- 48:22a human in the loop.
- 48:23Where's the human in the loop?
- 48:24On an automated insulin pump,
- 48:27just as one example,
- 48:30OK.
- 48:31We have then also issues about
- 48:35transparency built into these
- 48:38contracts because the contracts
- 48:41themselves are protected
- 48:43as intellectual property.
- 48:45The screenshots are protected
- 48:46as intellectual property.
- 48:47So you can't show them to somebody else,
- 48:50particularly outside your organization.
- 48:51The errors you might think are there,
- 48:54you can't tell people outside
- 48:56your organization because that's
- 48:58all prevented by the contracts.
- 49:02How the algorithms work, we don't know.
- 49:06That's also protected by
- 49:07intellectual property,
- 49:08whether it's in electronic health
- 49:10record systems or anywhere else.
- 49:11We don't know how they're trained.
- 49:12We don't know what data they're trained on.
- 49:14We don't know how they work.
- 49:16We don't know how they reach decisions.
- 49:18We don't know how accurate they are.
- 49:20We don't know very much about them at all.
- 49:27So having transparency becomes very
- 49:32difficult. On liability and malpractice,
- 49:35standards of care are changing
- 49:36as these systems are being used.
- 49:38So that affects also your responsibilities
- 49:42as clinicians and what you may be
- 49:45found liable for and what you may not.
- 49:47And right now, based on what limited
- 49:50evidence we have from the legal system,
- 49:52you're damned if you do and
- 49:53you're damned if you don't.
- 50:00So I think there's things we can do and
- 50:03should be paying attention to or more
- 50:06attention than you're already paying to.
- 50:08We as bioethicists, we as clinicians,
- 50:11we as patients, we as IRBs,
- 50:15we as people who make policy,
- 50:18everybody needs to understand better
- 50:20some of these issues and some of the
- 50:25technology that is related to these issues.
- 50:28The legal and risk departments
- 50:29need to know that.
- 50:30My understanding from the legal people
- 50:32I know is that the legal departments
- 50:35don't have a clue about a lot of this.
- 50:38We need, therefore, better education
- 50:40all across the board.
- 50:42That's always going on in medicine.
- 50:44We have to learn about new
- 50:48treatments. We have to learn about new drugs,
- 50:50We have to learn about new guidelines.
- 50:52We have to always keep learning.
- 50:53And now we also have to include
- 50:55this in what we have to keep
- 50:57knowing and learning about,
- 50:58including the ethical and legal and
- 51:02social issues that accompany it. To me,
- 51:05that means also we need better studies.
- 51:07We need better information to know how
- 51:10do these systems work in real life?
- 51:12How do they work in real practice settings?
- 51:14How do they work in the home?
- 51:16How do they work in the clinic?
- 51:19And to do those studies well,
- 51:21we need multidisciplinary methods
- 51:23and approaches and questions.
- 51:27It's important to do those
- 51:29studies not only at the outset,
- 51:31but with ongoing periodic
- 51:33evaluation of these systems,
- 51:35not just how effective they are,
- 51:37not just the outcomes that
- 51:38are coming out of them,
- 51:39but what sorts of ethical and
- 51:42patient centeredness issues may
- 51:44be coming up here as well.
- 51:46For that we need much more patient
- 51:48and community involvement so
- 51:50that the values of everybody
- 51:53are taken account of,
- 51:55so that we try to accommodate as
- 51:57best we can all the differences
- 52:00across the way people are. And these
- 52:04kinds of questions and issues and
- 52:07considerations are common to all
- 52:10information technologies in healthcare.
- 52:12They are common to information
- 52:15technologies outside of healthcare.
- 52:17And what we can learn from other
- 52:19fields is important here too.
- 52:21And what we can teach other
- 52:22fields is important here too.
- 52:26The common issues that have come up
- 52:28over and over and over again within
- 52:30medical informatics and now more
- 52:32in bioethics are privacy, consent.
- 52:35More recently, bias.
- 52:37Bias is built into the way the data is
- 52:40collected, the way the data is analyzed,
- 52:42the way the data is trained, and so on.
- 52:44And quality of care issues.
- 52:46There are more issues than that.
- 52:48I've touched on a bunch of them.
- 52:50There are more I'm sure that are coming up,
- 52:53that have come up and that will come up.
- 52:55So I think we're going to end up
- 52:59again with visions that are utopian,
- 53:03visions that are dystopian,
- 53:04and we'll end up somewhere in the middle.
- 53:06I think it's up to us where
- 53:08the middle is going to be.
- 53:10And that may be different
- 53:12for different people.
- 53:13And I hope I have inspired you
- 53:15then to think about how to get to
- 53:18those wonderful possibilities that
- 53:21we have and improve healthcare,
- 53:24treat patients as people,
- 53:27not as representations by data,
- 53:29not as sources of data to be monetized.
- 53:32How we deal with the data,
- 53:34how we deal with the records,
- 53:36how we deal with the technologies
- 53:38is up to us.
- 53:39And I will be delighted to
- 53:42continue this discussion.
- 53:43There's a lot more to be said,
- 53:45a lot of questions to be raised,
- 53:46a lot of ideas here.
- 53:48We need all of you to do that.
- 53:50And if you want CME credit for this,
- 53:54the number is right there.
- 53:56And how to reach me is right there too.
- 53:58Thank you.
- 53:58Do
- 54:04you want to have a seat up there?
- 54:06And I'm sure up there is fine. So.
- 54:11So what we'll do next is Karen
- 54:14or actually Sarah, volunteered.
- 54:15Sarah, if you want to do this,
- 54:16this will be very kind.
- 54:22Here we go. So I would ask you to
- 54:24raise your hand and Please wait
- 54:25until Sarah gets there with the mic.
- 54:27And as you know, Sarah,
- 54:28I try and mix it up so various people can speak.
- 54:30So I'll let you know who looks like
- 54:32they shouldn't be allowed to speak.
- 54:34But I don't think we have
- 54:34anything like that here.
- 54:35So we'll we'll try and hear
- 54:37from many different folks.
- 54:38I wanted to take the prerogative
- 54:40to take the first question and
- 54:42let it be kind of a big vague one.
- 54:44This was,
- 54:45I mean, I was sitting here taking
- 54:47notes because there's so much in
- 54:49this that I want to understand.
- 54:50So we talked about bioethics
- 54:52and how bioethics kind of looks
- 54:54at AI and AI capabilities,
- 54:56things like privacy and consent.
- 55:00I want to look for a minute
- 55:01about what AI has to offer
- 55:03specifically to bioethics and get
- 55:04your take on that a little bit.
- 55:06And here's to give you a specific,
- 55:07concrete question.
- 55:10As I understand it, a layman's
- 55:11understanding about this, one
- 55:13could have a system, right,
- 55:14that is capable of reading.
- 55:18And the context I want to ask my
- 55:20question in is with a bioethics consult, right,
- 55:22a medical clinical ethics consult:
- 55:23what should we do in this
- 55:24particular clinical ethics case.
- 55:26So what happens is it comes to
- 55:28individuals, and a group of two
- 55:30or three individuals, or three
- 55:31or four individuals, sit here,
- 55:33get information
- 55:35from the patient's record,
- 55:36but primarily from the
- 55:37patient telling the story or the
- 55:39patient's family telling the story,
- 55:40then the clinicians tell their story
- 55:41and then they have a conversation
- 55:43with these people and then come up
- 55:44with some recommendations. Right.
- 55:46That's how ethics consults work.
- 55:48Most of you know some of you are have
- 55:50been involved in those for a long time.
- 55:52Of interest,
- 55:52a question now: we hope that the
- 55:55consults are only as good as the
- 55:57moral compass of, and to some extent the
- 55:59expertise of, those ethics consultants.
- 56:01So imagine a system that has read
- 56:05all of the Greek philosophers,
- 56:06that has read all of the modern philosophers,
- 56:09right? And that has read, you know,
- 56:13that six volume encyclopedia of bioethics.
- 56:15Read all of that.
- 56:16It's all there.
- 56:17This system has
- 56:18access to far more learning,
- 56:20if you will, far more information
- 56:23than does any human ethics consultant.
- 56:26That doesn't necessarily speak to judgement,
- 56:28but nevertheless,
- 56:28is there a setting in which it
- 56:31could happen where one could ask?
- 56:33And I'm not saying to fly on autopilot.
- 56:35Imagine that Jack Hughes is doing
- 56:36an ethics consult next week.
- 56:38OK, we're going with the gold standard.
- 56:39We've got Jack Hughes up here.
- 56:41So Jack is doing an ethics
- 56:42consult and he says, well,
- 56:43I'm not putting all this stuff together,
- 56:45but why not also ask the Kaplan device?
- 56:48Because the Kaplan device could maybe come
- 56:50up with some things I hadn't thought of.
- 56:52The Kaplan device says,
- 56:53well,
- 56:53of course,
- 56:53in that landmark case
- 56:55from 1962 of Logus versus
- 56:56Bogus, and Jack says he
- 56:58didn't even know about that case.
- 57:00But the Kaplan device knows about
- 57:01that case because the Kaplan device
- 57:03has read a lot more than Jack has.
- 57:04So Jack,
- 57:05while he can bring forward years
- 57:07of experience and judgment and
- 57:09a great deal of expertise,
- 57:10perhaps would say,
- 57:11well,
- 57:12why not augment my expertise
- 57:13and maybe even my judgment?
- 57:15Let's see what the Kaplan device has
- 57:18to say about this, because it might
- 57:19actually have insights that I lack.
- 57:21Do we have that ability?
- 57:22Should we pursue it?
- 57:24We do have that ability.
- 57:26We should pursue it.
- 57:27But you need to pursue it with caveats.
- 57:29I didn't talk about the important issue
- 57:32of how you benchmark these systems.
- 57:34So, for example, Jack has a lot of clinical
- 57:38expertise and a lot of bioethics expertise.
- 57:40As you do, and as many of the people here do,
- 57:44do you want to benchmark the system against
- 57:47experts like you in ideal circumstances?
- 57:52Do you want to benchmark your systems against
- 57:54most clinical people who are not experts,
- 57:56in ideal situations?
- 57:58Do you want to benchmark them against,
- 58:01maybe, a resource-poor
- 58:02area where we don't have
- 58:04much expertise at all?
- 58:05So one way or another, you have to think about
- 58:08how you want to build these systems
- 58:10in terms of how they're going to be used.
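To make that benchmarking point concrete, here is a minimal sketch, not from the talk itself: the case IDs, recommendations, and panel names are all hypothetical, and the only point is that the same system scores differently depending on which reference panel you compare it against.

```python
# A minimal sketch (hypothetical data, not from the talk): the same system's
# recommendations look better or worse depending on the reference panel.
system_recs = {"case1": "mediate", "case2": "withdraw", "case3": "continue"}

reference_panels = {
    "expert_ethicists":   {"case1": "mediate", "case2": "withdraw", "case3": "mediate"},
    "general_clinicians": {"case1": "continue", "case2": "withdraw", "case3": "continue"},
    "resource_limited":   {"case1": "continue", "case2": "continue", "case3": "continue"},
}

def agreement(system, reference):
    """Fraction of shared cases where the system matches the reference panel."""
    shared = sorted(set(system) & set(reference))
    return sum(system[c] == reference[c] for c in shared) / len(shared)

for panel, recs in reference_panels.items():
    print(f"{panel}: agreement = {agreement(system_recs, recs):.2f}")
```

The design question the talk raises is exactly which of these reference panels should count as "ground truth" before a system is deployed in a particular setting.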
- 58:13Another thing, of course,
- 58:15is we know already from a lot of experience
- 58:18over the past year that you get all
- 58:20sorts of interesting fabrications,
- 58:21hallucinations, false references.
- 58:23Your bogus court case may actually be bogus.
- 58:27You have to check, OK?
- 58:30And no matter how much you've read,
- 58:35you have this patient with this
- 58:37family right here, right now.
- 58:39Decisions that have to get made.
- 58:41And I think what you bring
- 58:44to that as caring, humane people,
- 58:46as well as expert clinicians and ethicists,
- 58:50is really important as part of that input.
- 58:53But why not take advantage of more
- 58:56information that may be helpful?
- 58:58I think it would not hurt.
- 58:59There is another issue here about patient
- 59:01consent, too. There's been argument in
- 59:04the field over whether patients should have to
- 59:06consent to the use of these systems, or
- 59:08whether patients should have to be told that
- 59:10they're not being used and consent
- 59:12to that, because sometimes the systems
- 59:15are better than the doctors are.
- 59:18So you've got a number of really
- 59:20interesting issues here,
- 59:21but my gut sense is certainly
- 59:23take advantage of whatever you can
- 59:25take advantage of. Why not?
- 59:28Thank
- 59:28you. Thank you. And Jack,
- 59:30thanks for playing along.
- 59:32And good job with that.
- 59:34I think we've started a whole new
- 59:36genre of possible
- 59:37scenarios with Jack in them.
- 59:39I could have asked Jack GPT to write this
- 59:41talk for me too, but I didn't. So Jack,
- 59:44what would you do in that situation?
- 59:46Would you consult some sort of AI system? I'd
- 59:50be delighted. I would be delighted, yeah.
- 59:52What I would do with it, I don't know.
- 59:54It depends on whether I liked the response.
- 59:56But it couldn't hurt to look
- 59:59and see what it had to say. Exactly,
- 01:00:00right. It may change
- 01:00:02your thinking in some way.
- 01:00:05Thank you. Oh, a point
- 01:00:06to add to that. There we go. I
- 01:00:08just want to follow on.
- 01:00:09I love that you said it depends if
- 01:00:11you like the response, because I think
- 01:00:14there's something about human gut
- 01:00:16reactions that's not
- 01:00:18accounted for in these systems.
- 01:00:21And so that's why I really
- 01:00:23like that you added that
- 01:00:24piece, because that's not something
- 01:00:26you're going to get out of the
- 01:00:28clinical decision support system,
- 01:00:29Kaplan Incorporated or whatever.
- 01:00:31I'm going to stand up because I
- 01:00:33can't see you otherwise. Thank you.
- 01:00:35Thank you. Someone over here had a question.
- 01:00:37There we go.
- 01:00:38Can you talk a little bit about the
- 01:00:40argument that the existing system
- 01:00:42perpetuates existing inequities,
- 01:00:44and so we have the moral
- 01:00:46imperative to try something different?
- 01:00:47So are we moving too
- 01:00:49fast or are we moving too slow,
- 01:00:51broadly speaking?
- 01:00:52And if I can be selfish,
- 01:00:53I'll stack on a second question,
- 01:00:55which is to ask you to talk
- 01:00:57about international differences
- 01:00:58in ethics and legal frameworks.
- 01:01:00OK, let me take the second part first,
- 01:01:03because there are different value
- 01:01:07systems at play internationally.
- 01:01:09The US is very individualistically
- 01:01:11focused and prioritizes
- 01:01:13autonomy and individualism.
- 01:01:16It's not quite so human-rights
- 01:01:17focused as you have, say,
- 01:01:19in Europe, nor does it have the much more collective
- 01:01:22values that you may have in Asia.
- 01:01:24And I'm not trying to
- 01:01:26isolate various countries.
- 01:01:27I'm trying to point out that you get
- 01:01:30different kinds of values that are
- 01:01:31prioritized in different places.
- 01:01:33That needs to be taken account
- 01:01:35of when these systems are built.
- 01:01:37They need to be taken account of when
- 01:01:39they're moved from one place to another.
- 01:01:40They need to be taken account
- 01:01:42of when you're assessing:
- 01:01:44should we have this system,
- 01:01:46should Jack follow this advice?
- 01:01:48Maybe this advice is based on a
- 01:01:50set of values that are not local.
- 01:01:54I've used international examples,
- 01:01:56but certainly different localities and different
- 01:01:58communities here have different ways of
- 01:02:00thinking about and approaching things.
- 01:02:02The biases are built in and we need to
- 01:02:05be aware of that and we need to try
- 01:02:07to counteract that in various ways.
- 01:02:09And there are all sorts of techniques
- 01:02:11for trying to do that, which
- 01:02:12sometimes cause different biases.
- 01:02:14But it's something that needs
- 01:02:16a lot more attention,
- 01:02:17especially when you're amplifying
- 01:02:19them by taking outputs built on previous
- 01:02:23data and feeding them back in as new inputs,
- 01:02:27to make more outputs that are even
- 01:02:29more biased, because now you have
- 01:02:31even more biased sets of data.
- 01:02:33So those are important issues and
- 01:02:35they've been coming up a fair amount.
- 01:02:36It's one of the newer issues that we're
- 01:02:39seeing, because there wasn't a whole
- 01:02:40lot of attention paid to this
- 01:02:42before we started to have this
- 01:02:45tremendous burgeoning of AI systems.
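A minimal sketch of that feedback loop, with entirely made-up numbers and a toy retraining rule: recommendations for two groups are recycled as the next cycle's training signal, and the initial gap between the groups widens with each cycle.

```python
# A minimal sketch, not from the talk: a toy feedback loop in which a model's
# outputs become the next round's training data, so an initial skew between
# two groups grows with each retraining cycle. All numbers are illustrative.
def retrain(share_a, share_b, feedback=0.3):
    """One retraining cycle: each group's share of positive recommendations
    drifts toward the model's own previous outputs, widening the gap."""
    gap = share_a - share_b
    return share_a + feedback * gap * (1 - share_a), share_b - feedback * gap * share_b

share_a, share_b = 0.55, 0.45   # initial recommendation rates for groups A and B
for cycle in range(6):
    print(f"cycle {cycle}: group A = {share_a:.2f}, group B = {share_b:.2f}")
    share_a, share_b = retrain(share_a, share_b)
```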
- 01:02:46I think you asked me something else,
- 01:02:49but I'm not certain what it was.
- 01:02:54I guess it was just whether
- 01:02:56we're moving too fast or too slow,
- 01:02:58and then maybe how
- 01:03:01these tools scale in a way
- 01:03:03that the bias intrinsic in one
- 01:03:05individual human doctor might not.
- 01:03:09In terms of moving too fast or too slow,
- 01:03:11it depends who you are.
- 01:03:13There is clearly an arms race
- 01:03:16going on here that's important
- 01:03:19strategically for national purposes,
- 01:03:21national strategic purposes.
- 01:03:23It's important strategically for
- 01:03:25various businesses, and it's important
- 01:03:29strategically for healthcare systems
- 01:03:31that are also businesses and want to
- 01:03:33attract patients with the best possible
- 01:03:35care they could give, at the least
- 01:03:38possible price they could produce it at,
- 01:03:40with the most access they could provide,
- 01:03:42which of course are all in some ways
- 01:03:45competing with each other as values.
- 01:03:47So it's not an easy question to answer,
- 01:03:49and it's going to depend on who's
- 01:03:51making that judgment.
- 01:03:54We know that it's very difficult for
- 01:03:57regulation to keep up with technology
- 01:03:59that whatever you do regulatorily is
- 01:04:01going to change by the time it's passed
- 01:04:04in terms of where the technology is.
- 01:04:06So that's not going to be
- 01:04:08an easy solution either.
- 01:04:10But I think the advocacy work,
- 01:04:12some of the attention we're paying,
- 01:04:13and some of the ways you can influence
- 01:04:15the decisions that get made where you are,
- 01:04:17those are some of the kinds of things
- 01:04:20you could do to help shape where it goes.
- 01:04:23We've seen this already with things like,
- 01:04:25you know, user protests on Facebook.
- 01:04:28Facebook does all kinds of things,
- 01:04:29and every once in a while people say,
- 01:04:31hey, that's enough,
- 01:04:34OK?
- 01:04:35So you're not powerless
- 01:04:39as to where that's going to go.
- 01:04:41What I'm hoping for by doing
- 01:04:43talks like this is that it goes in
- 01:04:46directions that we're all going
- 01:04:47to find valuable and we can try.
- 01:04:51Thank you so much. Next question, Sir.
- 01:04:59So you mentioned how the
- 01:05:03United States has certain values
- 01:05:05and other countries or other areas
- 01:05:07would have different values, and we
- 01:05:08have to create these systems with
- 01:05:10a sort of values-based approach.
- 01:05:11But since even the distribution of
- 01:05:17values in this room is probably
- 01:05:19so different, do you see a future where we end up creating
- 01:05:22systems that are making clinical
- 01:05:24decisions without any regard to the
- 01:05:26diversity of values that exists
- 01:05:28within a given population? Well,
- 01:05:30that was part of why I was saying
- 01:05:32you need more than the system.
- 01:05:34You need more than whatever the
- 01:05:36algorithm puts out because people
- 01:05:37are going to be using those systems.
- 01:05:40People are going to be affected
- 01:05:41by the systems, and that's where
- 01:05:43also some of the values come in.
- 01:05:45Of course, the systems are built
- 01:05:47with certain values built in as well.
- 01:05:49And if you know what those are,
- 01:05:51then you can try to take account
- 01:05:53of them for better and for worse.
- 01:05:54In terms of the decisions that you make,
- 01:05:57I do not think one size fits all.
- 01:05:59You have many,
- 01:06:01many variations in all sorts
- 01:06:03of circumstances,
- 01:06:03and that's part of why I'm saying
- 01:06:06there's a problem here when you're
- 01:06:08looking only at data and basing
- 01:06:10recommendations only on AI outputs
- 01:06:11or any other kind of outputs.
- 01:06:15I have a quick question for the audience
- 01:06:17and then I have a question online.
- 01:06:18So the question for the audience:
- 01:06:20I am getting back to the
- 01:06:22initial scenario about when we have
- 01:06:23AI giving us one set of advice.
- 01:06:25Perhaps our human intuition
- 01:06:26tells us another, right?
- 01:06:27Our human intuition informed by
- 01:06:29our values as well as our biases.
- 01:06:31Any pilots in the room? No.
- 01:06:33I come from a family of pilots.
- 01:06:35I'm not a pilot,
- 01:06:35but I come from a family of them.
- 01:06:37My brothers became pilots as teenagers,
- 01:06:38and in a nutshell,
- 01:06:41what they learned pretty early on is
- 01:06:43that if you're flying over the water and
- 01:06:45you can't see any lights anywhere,
- 01:06:47and you know for a fact that you're
- 01:06:49right side up, and the instruments
- 01:06:50tell you that you're upside down,
- 01:06:52what do you think? You're upside down.
- 01:06:56And so, in terms of whether
- 01:06:58or not we trust the technology,
- 01:07:01some of the things we ask the
- 01:07:02technology to do, like tell us is
- 01:07:03this or is this not necrotizing
- 01:07:04enterocolitis that I'm looking at
- 01:07:06on this X-ray, are very different
- 01:07:08from the judgment of whether we
- 01:07:09should favor the father's opinion
- 01:07:11or the mother's opinion.
- 01:07:12But that interface of our
- 01:07:13judgment, which carries with it
- 01:07:15our biases, with the computer,
- 01:07:17I think is a fascinating one.
- 01:07:18But let me read a question to
- 01:07:20you from... oh, that's good.
- 01:07:22Karen,
- 01:07:22you want to come up here, and
- 01:07:24then I'll pull that question back up.
- 01:07:26Meantime,
- 01:07:26Sarah's got something to say.
- 01:07:27Oh, sorry.
- 01:07:28I had a follow-up comment to that,
- 01:07:29which I think is really pertinent,
- 01:07:32which is just, you know,
- 01:07:32this discussion sort of reminded me of
- 01:07:34the fact that I think too often we tend
- 01:07:37to conflate knowledge with expertise.
- 01:07:38And so, you know, the comment about whether
- 01:07:41you're right side up or upside down,
- 01:07:42that's one simple,
- 01:07:44presumably easily verifiable fact.
- 01:07:48Whereas a lot of clinical decisions
- 01:07:50are made based on probabilities and
- 01:07:52risk versus benefit, there often
- 01:07:54isn't one objectively right answer.
- 01:07:56It's very dependent on
- 01:07:57patient goals and values.
- 01:07:58And you know, even from my own experience,
- 01:08:02for example: obviously I,
- 01:08:04you know, I'm a cardiologist,
- 01:08:05I went to medical school,
- 01:08:06I did an internal medicine residency.
- 01:08:08But I haven't practiced internal
- 01:08:09medicine for 10 years, and I certainly
- 01:08:12don't have expertise in internal
- 01:08:14medicine anymore and would not
- 01:08:16feel comfortable practicing it.
- 01:08:17I can draw on my knowledge,
- 01:08:19I can Google a lot, and I can probably sound
- 01:08:21reasonably intelligent speaking about it,
- 01:08:24you know, maybe not terribly unlike
- 01:08:26an AI hallucination perhaps, but I
- 01:08:28don't have any business
- 01:08:30seeing internal medicine patients anymore,
- 01:08:31which is why I let my boards lapse
- 01:08:33for medicine, not cardiology.
- 01:08:34And I think it's very telling that these
- 01:08:41AI systems, for example, will sort of
- 01:08:44disclaim any liability as well.
- 01:08:46They're sort of,
- 01:08:47they're almost admitting that
- 01:08:49they're a little bit weak sauce
- 01:08:51compared with clinical judgement.
- 01:08:53And I think that, you know,
- 01:08:54there's fact-based knowledge
- 01:08:55where there's
- 01:08:57a clear right or wrong answer:
- 01:08:58are you right side up or upside down?
- 01:09:00But often in clinical medicine
- 01:09:02the answers are a lot murkier.
- 01:09:04And so, you know,
- 01:09:05I know that AI will probably be
- 01:09:08way more advanced even in 10 years,
- 01:09:09but, you know,
- 01:09:10I think now the argument's
- 01:09:13pretty clear that it's a supplement,
- 01:09:13and it's not
- 01:09:15going to replace us anytime soon.
- 01:09:17So,
- 01:09:17Sarah, we're seeing results coming
- 01:09:20out that EKG analysis by AI is much better
- 01:09:25than most physicians' readings.
- 01:09:27Yeah, I was going to say, that depends
- 01:09:29on the indication though, right?
- 01:09:30Yeah. How would you work with that?
- 01:09:32So we've actually been using EKG
- 01:09:34computer analysis for years and I
- 01:09:37think it's a really helpful tool.
- 01:09:39But again, we always use it as a supplement.
- 01:09:40And actually I'm teaching the
- 01:09:42cardiology course right now to the
- 01:09:43MS1s, and I even tell them, look,
- 01:09:45you don't need to be able to
- 01:09:47calculate the exact heart rate.
- 01:09:48The computer is going to be
- 01:09:49able to do that for you.
- 01:09:50But you do want to make sure you can
- 01:09:52roughly estimate the heart rate.
- 01:09:53So if the computer is saying the heart
- 01:09:56rate is 180 because it's double counting
- 01:09:59the QRS complexes and the T waves,
- 01:10:00but actually the heart rate is only 90.
- 01:10:03Like you need to be able to know
- 01:10:04that because you know that treatment
- 01:10:05is going to be very different.
- 01:10:07But the computer can sometimes
- 01:10:08pick up subtle things. Sometimes,
- 01:10:10you know, when I read EKGs,
- 01:10:12I'll do my own read and then I'll
- 01:10:14see what the computer spits
- 01:10:16out, and usually there's
- 01:10:18agreement most of the time.
- 01:10:19If there's something
- 01:10:21that the computer calls, I can see why
- 01:10:23it called it, but it misread some
- 01:10:25noise in the signal, and so the signal-
- 01:10:27to-noise ratio is always an issue.
- 01:10:28There's still a lot of artifact,
- 01:10:30but in terms of predicting things
- 01:10:32like risk of cardiomyopathy in the
- 01:10:34future, much more abstract things,
- 01:10:35AI analysis of the EKG really does outperform clinicians.
- 01:10:38So I think, again, it's about recognizing our
- 01:10:40mutual limitations and recognizing
- 01:10:42that it's a supplement rather than
- 01:10:44a replacement for us, because I've
- 01:10:46seen mistakes made with over-reliance
- 01:10:48on AI interpretation alone.
- 01:10:49So
- 01:10:50You know, we asked that exact same
- 01:10:52question to some computers that
- 01:10:53read EKGs, and what they told
- 01:10:55us is, we see the cardiologist as a
- 01:10:58necessary supplement to our reading.
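A minimal sketch of the kind of sanity check Sarah describes, using hypothetical R-peak times and a hypothetical machine-reported rate: if the reported heart rate is roughly twice what the R-R intervals imply, suspect double counting.

```python
# A minimal sketch, not from the talk: a rough sanity check on a machine-read
# heart rate using hypothetical R-peak times (in seconds). If the reported
# rate is about twice the rate implied by the R-R intervals, suspect the
# algorithm is double counting (e.g. counting T waves as QRS complexes).
def rate_from_r_peaks(r_peak_times_s):
    intervals = [b - a for a, b in zip(r_peak_times_s, r_peak_times_s[1:])]
    mean_rr = sum(intervals) / len(intervals)
    return 60.0 / mean_rr          # beats per minute

r_peaks = [0.00, 0.67, 1.33, 2.00, 2.67, 3.33]   # hypothetical, about 90 bpm
estimated = rate_from_r_peaks(r_peaks)
machine_reported = 180.0                          # hypothetical machine read

if machine_reported > 1.7 * estimated:
    print(f"Suspect double counting: machine says {machine_reported:.0f} bpm, "
          f"R-R intervals suggest {estimated:.0f} bpm.")
```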
- 01:11:00I want to read a comment to you, please,
- 01:11:02Bonnie, and then a specific question.
- 01:11:04So the comment is this, from
- 01:11:06one of our Zoom folks here:
- 01:11:08As with diagnostic laboratory tests,
- 01:11:09clinicians must ultimately weigh
- 01:11:11and integrate information from AI
- 01:11:13expert systems into their overall
- 01:11:15knowledge and understanding of the
- 01:11:17patient in crafting medical decisions.
- 01:11:19Other changes in the pacing and delivery
- 01:11:20of medical care have frayed the
- 01:11:23traditional physician-patient relationship.
- 01:11:24Nevertheless,
- 01:11:25Peabody's dictum stands: the secret
- 01:11:26of the care of the patient is in
- 01:11:29caring for the patient.
- 01:11:30No AI system can replace that connection.
- 01:11:34I would just comment,
- 01:11:35and then, if you want to comment too,
- 01:11:37I would just add:
- 01:11:37the AI system is less likely to say,
- 01:11:39I'm going to favor this guy over
- 01:11:40that guy because this guy looks
- 01:11:42more like my family than that guy.
- 01:11:44The AI system is less likely to
- 01:11:46do that than I am or you are.
- 01:11:49Here's a question for you, please.
- 01:11:50There are many groups working to
- 01:11:53create ethical frameworks for AI:
- 01:11:55many conferences, AI for Good,
- 01:11:57the UN, UNESCO,
- 01:11:57and other government
- 01:11:59ones that have been mentioned.
- 01:12:00Hallucinations and bias can be a
- 01:12:02real big problem; they can rewrite history.
- 01:12:05Do you have one or two agencies you
- 01:12:07recommend watching
- 01:12:08for ethical guidelines?
- 01:12:09I believe in making AI part of the team,
- 01:12:11but it's not the whole team.
- 01:12:14Is there a particular place
- 01:12:15you recommend we go?
- 01:12:16There is not any particular
- 01:12:18place at this point whose
- 01:12:21guidelines I would
- 01:12:22say you really should pay attention
- 01:12:24to and nobody else's.
- 01:12:26There are a lot of
- 01:12:28overlapping guidelines. But
- 01:12:31again, I think they're going to vary
- 01:12:33with different kinds of places.
- 01:12:34They're going to vary with the mission
- 01:12:37of those agencies and they're going to
- 01:12:39vary with the degree to which you can
- 01:12:42actually operationalize the guidelines
- 01:12:43versus how much they're aspirational.
- 01:12:46And again, I'm going to take the
- 01:12:49stance that the more you can learn,
- 01:12:51the more sets of guidelines you have
- 01:12:53out there that you can learn from,
- 01:12:55the better; this one complements that one.
- 01:12:57That's where I think we are now,
- 01:12:58and that's where I think we need to
- 01:13:00be now as we work more on developing
- 01:13:03what really makes the most sense here.
- 01:13:06We've had a lot of experience with AI,
- 01:13:08but one of the things that's different
- 01:13:11with what's happening now with the
- 01:13:12large language models is that there's
- 01:13:14a lot more attention being paid.
- 01:13:15So there's a lot more great minds
- 01:13:17working on it.
- 01:13:18And there's also this:
- 01:13:20it used to be that you needed
- 01:13:23clinical expertise to develop the
- 01:13:26models that were being used for AI.
- 01:13:28You had to have people clinically
- 01:13:31knowledgeable who were working
- 01:13:33with the developers to do that.
- 01:13:35You don't have that anymore.
- 01:13:36You're going off the data.
- 01:13:39That's a big difference and the clinical
- 01:13:42expertise needs to be there somehow.
- 01:13:44So that's another piece to think about.
- 01:13:47But this is a much more general
- 01:13:48question because it's not just medicine.
- 01:13:50You have exactly the same issues
- 01:13:53in all the kinds of uses that this
- 01:13:56data is being put to.
- 01:13:57Targeted advertising,
- 01:13:58bail decisions, insurance sales,
- 01:14:01credit scoring, mortgage decisions,
- 01:14:04employment, hiring decisions.
- 01:14:05I mean all kinds of things that
- 01:14:08are being done with AI that bring
- 01:14:11up the same kind of issues.
- 01:14:13So would you say that the
- 01:14:16cars that drive themselves fall
- 01:14:20into that same bigger category, when they take in all
- 01:14:22that data from all around and then
- 01:14:24make decisions about when to turn,
- 01:14:26how fast to go, etcetera,
- 01:14:27etcetera. And not only that,
- 01:14:28they're also monitoring you as
- 01:14:31you're driving or if they're not
- 01:14:33yet doing all of that, they will.
- 01:14:36So that if you seem like
- 01:14:38maybe you're a little sleepy,
- 01:14:41you shouldn't be driving.
- 01:14:42They may automatically lock.
- 01:14:43If you have gone somewhere
- 01:14:45where the rental company's
- 01:14:46agreement says you shouldn't go,
- 01:14:48they may lock. The data that's being sent,
- 01:14:51monitoring your physiological signs,
- 01:14:53can be sent all over. I mean,
- 01:14:56it's being transmitted back and forth,
- 01:14:57so there are lots of opportunities for
- 01:15:00it to be intercepted and collected.
- 01:15:02It can be used by the police.
- 01:15:03It could be used for all kinds of things.
- 01:15:05So I hope that's a
- 01:15:07nice example of how
- 01:15:09the same kinds of issues keep
- 01:15:10coming up all over the place.
- 01:15:12And if we think about cars,
- 01:15:14we may get some insights into
- 01:15:16medicine and vice versa.
- 01:15:18Thank
- 01:15:18you. Question please.
- 01:15:19Yeah, yes. Speaking about the
- 01:15:21cross-cultural perspectives,
- 01:15:27I had taken a medical
- 01:15:29anthropology class several years ago.
- 01:15:31We talked about why biomedical
- 01:15:33medicine actually works.
- 01:15:35Part of it is the belief in the medicine,
- 01:15:38the trust in the clinician.
- 01:15:41Some say the placebo effect,
- 01:15:43and then there's the fact
- 01:15:44that it really just does work:
- 01:15:45this drug will lower your blood pressure,
- 01:15:48lower your cholesterol.
- 01:15:50But then there was something else that
- 01:15:51they identified and called factor X,
- 01:15:54and they said, we don't know
- 01:15:55what it is, it just works.
- 01:15:57And I think that speaks a lot to this:
- 01:16:00AI is not capable of taking into
- 01:16:03account everything that we don't know.
- 01:16:05There is some factor X, you know,
- 01:16:08some distinction between one
- 01:16:10individual's belief system and another's.
- 01:16:13My concern is not so much AI in
- 01:16:16assisting the clinician to give a better
- 01:16:19diagnosis or better treatment plan,
- 01:16:22but what happens when the
- 01:16:23insurance company says, you know,
- 01:16:25there's a disparity here between what AI
- 01:16:27is telling us and what you're telling us?
- 01:16:30We're going to go with AI and
- 01:16:31we're not going to approve this.
- 01:16:34I wonder if that's happening yet. Bonnie,
- 01:16:36do you know specifically about that point?
- 01:16:38That's a wonderful point. Thank you.
- 01:16:39But the question specifically
- 01:16:41about the insurance companies:
- 01:16:42I mean, I've been in situations,
- 01:16:43perhaps you have, where my physician
- 01:16:45says clearly the data suggest
- 01:16:46this is the drug you should be on.
- 01:16:47But no, the insurance company
- 01:16:49says there's not enough data,
- 01:16:50so you don't get it. Yeah.
- 01:16:52You don't need AI to have
- 01:16:53that problem, but
- 01:17:00There are ways to sort of
- 01:17:01prevent that as well.
- 01:17:03You know, as you had mentioned,
- 01:17:08if only four of five diagnostic
- 01:17:10criteria are met,
- 01:17:12we can kind of circumvent
- 01:17:14that by adding the fifth.
- 01:17:16And that, I think, happens all the
- 01:17:17time in a clinical setting, where
- 01:17:19you say we need to keep this
- 01:17:21individual a little bit longer,
- 01:17:22say their anxiety is up or whatever.
- 01:17:26And also that raises another
- 01:17:28question about its use in psychiatry
- 01:17:31and fields where there's so much
- 01:17:34more than just diagnostic numbers.
- 01:17:39I'm going to add one other issue here,
- 01:17:41about data quality, that also has to
- 01:17:43do with gaming and insurance.
- 01:17:45How many diagnoses do I have in order
- 01:17:49for medical insurance to pay for lab
- 01:17:51tests, when I don't have those diagnoses?
- 01:17:55OK. So it's happening for me.
- 01:17:59It's happening for everybody.
- 01:18:00And all that data is there,
- 01:18:04and once it's there,
- 01:18:05it's there.
- 01:18:07As we've all learned,
- 01:18:10it doesn't forget.
- 01:18:11You know, the privacy stuff seems to
- 01:18:12stand out as a particular concern,
- 01:18:14doesn't it? I mean,
- 01:18:15I have to say,
- 01:18:17it wasn't long ago that I saw
- 01:18:19a physician and she said, I want
- 01:18:21you to know that everything you
- 01:18:22say here is just between us.
- 01:18:23And I thought, not a chance.
- 01:18:25I thought that everything I say here,
- 01:18:27anybody can find. I mean,
- 01:18:29how many times do
- 01:18:31we hear about how our medical
- 01:18:32records have been invaded, etcetera.
- 01:18:34So even if it's
- 01:18:35not invaded,
- 01:18:36the idea that this is confidential
- 01:18:38between me and my physician,
- 01:18:39it's really hard to buy that. Privacy
- 01:18:41has been there all along as an issue.
- 01:18:44One of my gripes is that it's
- 01:18:46predominated as the issue for many,
- 01:18:48many, many years without sufficient
- 01:18:50attention to all these other issues.
- 01:18:52And it's still a big issue despite that.
- 01:18:56But I don't think we should
- 01:18:58only be focusing on privacy.
- 01:19:00I think there are many other concerns
- 01:19:01and issues we need to focus on as well,
- 01:19:04and we certainly need to be
- 01:19:05paying more attention to privacy.
- 01:19:06That, as I said,
- 01:19:07is one of the effects that the
- 01:19:10large language models have brought
- 01:19:12to even more general attention.
- 01:19:14Thank you,
- 01:19:15Sir. In the back there please.
- 01:19:20Hi, I wanted to know, do you think
- 01:19:22there is some field of medicine that
- 01:19:24should be sanctuarized against even the
- 01:19:27capability of AI, to block AI from
- 01:19:29accessing it, in the sense that you want
- 01:19:32to keep the human practice active?
- 01:19:35I was thinking
- 01:19:36about surgery, for instance.
- 01:19:37If you are starting to rely more
- 01:19:39and more on AI to practice surgery,
- 01:19:41and then you lose expertise over time,
- 01:19:44should this field kind
- 01:19:45of be sanctuarized, like blocked,
- 01:19:46even if AI performs better than
- 01:19:47humans, just to preserve kind of the
- 01:19:49knowledge over time? For example,
- 01:19:51let's say in 50 years suddenly we
- 01:19:52don't have access to AI anymore
- 01:19:54for some reason; as you said,
- 01:19:56it might not be sustainable to
- 01:19:58maintain over these long times.
- 01:20:00Do you think at that time you could
- 01:20:02have a risk of losing it completely?
- 01:20:03We don't have any more practitioners
- 01:20:05who can practice surgery at that
- 01:20:06time, because we relied so much
- 01:20:08on AI before that.
- 01:20:09We don't have the capacity anymore.
- 01:20:11I'm thinking about crises that
- 01:20:12could arise for the population over time,
- 01:20:14where you could lose kind of the
- 01:20:16surgical capacity of a society,
- 01:20:17for instance.
- 01:20:18But maybe it's a large-scale question.
- 01:20:20I wouldn't make a blanket prohibition
- 01:20:23in any field against the use of AI.
- 01:20:26It depends on how it's being used.
- 01:20:28What's it being used for?
- 01:20:30All the kinds of considerations I
- 01:20:32brought up and you have brought up.
- 01:20:34There are already ways in
- 01:20:36which AI is improving surgery.
- 01:20:39You've got robotics going on that
- 01:20:42do much better at various aspects
- 01:20:46of very fine manipulation in
- 01:20:48surgery than anybody can do,
- 01:20:50not because people are inept,
- 01:20:52but because you just can't
- 01:20:54do that as a person.
- 01:20:56That's one example; another is modeling
- 01:21:00possible ways you can perform the
- 01:21:02surgery and where the surgical site is.
- 01:21:04So there are all kinds of things that
- 01:21:06I think it could be helpful for.
- 01:21:08But that doesn't mean that you should
- 01:21:12take any possible use and use it.
- 01:21:14And it doesn't mean you should
- 01:21:16prohibit any possible use.
- 01:21:17You need to think about and assess
- 01:21:19that as to what are the advantages,
- 01:21:21what are the disadvantages,
- 01:21:23what are the implications?
- 01:21:24How is it going to affect things?
- 01:21:25How does it fit what we do?
- 01:21:26I mean,
- 01:21:27all kinds of considerations
- 01:21:28that I've just listed here.
- 01:21:30Oh, Bonnie, I think we have just one more,
- 01:21:32one more quick question, and then
- 01:21:33we're gonna wrap it up, please.
- 01:21:35I'm not quite sure you got the
- 01:21:36interesting aspect of what he was asking,
- 01:21:38which was the danger of the human
- 01:21:41capacities atrophying, essentially,
- 01:21:42because you've started to rely
- 01:21:44too much on AI. Yes, agreed.
- 01:21:49So what do we do about it?
- 01:21:50How do we protect against that?
- 01:21:52But it's certainly not specific
- 01:21:53to medicine. How
- 01:21:54many of you can read a map
- 01:21:55the way you could 10 years ago?
- 01:21:57How many of you can do
- 01:21:58long division when you have a
- 01:21:59calculator that will do it? Yes.
- 01:22:01But remember, you're responsible
- 01:22:03for those calculations.
- 01:22:05When you're treating a patient,
- 01:22:06whether or not you're using a computer
- 01:22:10system or a calculator or pen and paper,
- 01:22:13you have to know whether it makes sense.
- 01:22:14If you get something that's
- 01:22:16orders of magnitude off,
- 01:22:16you ought to be able to recognize that.
- 01:22:21Bonnie, because our time is up,
- 01:22:23I'm going to ask if you have a final
- 01:22:26thought, and then I just have a word
- 01:22:27or two at the end. But no,
- 01:22:29go ahead with your word. OK.
- 01:22:30Well, my word's all about you, kid.
- 01:22:32I enjoyed all of this.
- 01:22:33This is my word. Lovely.
- 01:22:35Lovely having this discussion and
- 01:22:36I love these discussions, this
- 01:22:38program. I'm so pleased and
- 01:22:39proud of the people we bring in from
- 01:22:41all over the country and overseas.
- 01:22:43And I hope you guys have
- 01:22:44enjoyed those as well.
- 01:22:45But this is such a wonderful reminder of the
- 01:22:48strength we have right here in New Haven.
- 01:22:50I mean, this has just been a fantastic
- 01:22:52session, and we all learned so much.
- 01:22:53This was a terrific presentation
- 01:22:55and a terrific conversation.
- 01:22:56Thank you for being here.
- 01:22:57And Bonnie especially thank you
- 01:22:58for putting this together and
- 01:23:00giving the time to help us move
- 01:23:01along in our understanding of AI.
- 01:23:02Thank you so much.
- 01:23:03And we have various Yale swag here for you.
- 01:23:08Here you go. We put all sorts of stuff
- 01:23:10in here for you to take home and enjoy.
- 01:23:13All right. All right. Thank you guys.
- 01:23:16Thank you, Mark. This was
- 01:23:17really good.
- 01:23:18Thank you so much.