
12-14 MEDG: ChatGPT Has Arrived: Where it Belongs in Medical Education

December 14, 2023
  • 00:04So good to see you, everybody.
  • 00:06Welcome to our "ChatGPT Has Arrived" session, and I really have the honor of introducing two faculty and two students who have worked hard on the presentation today.
  • 00:20Conrad Safranek is an MD student
  • 00:22interested in pursuing internal
  • 00:24medicine and research regarding clinical
  • 00:27applications of artificial intelligence.
  • 00:29He graduated with honors from Stanford with a BS in Computational Biology and a minor in Management Science and Engineering.
  • 00:39His research focused on applying
  • 00:42machine learning algorithms to
  • 00:44clinical decision support systems.
  • 00:46And now at Yale,
  • 00:48his research has focused on
  • 00:50how large language models,
  • 00:52such as ChatGPT, which he's going to talk about today, can be applied both in clinical practice and to augment medical education.
  • 01:00And Anne Elizabeth Sidamon-Eristoff is an MD-PhD
  • 01:04student intending to study how early life
  • 01:08stress exposure alters neurodevelopment.
  • 01:11She graduated summa cum laude from Princeton.
  • 01:14She had an AB in Spanish and Portuguese
  • 01:17and a certificate in neuroscience.
  • 01:20She worked during her Princeton years,
  • 01:23studying early life stress.
  • 01:24And then she went to Boston Children's
  • 01:27and conducted research on how
  • 01:30children's environments influence
  • 01:32their mental health outcomes.
  • 01:34I'm so thrilled that they're
  • 01:36both with us today.
  • 01:37Also on this presentation, David Chartash is a lecturer in the Section of Biomedical Informatics and Data Science at Yale University.
  • 01:47He earned his undergraduate degree in Engineering Science in Electrical Engineering from the University of Western Ontario, then specialized in biomedical signals and systems at the University of Toronto.
  • 01:59He went on to earn a master's of Health
  • 02:02Science in Clinical Engineering,
  • 02:04focusing on the evaluation of the
  • 02:07prediction of hospital decompensation events.
  • 02:10He then obtained his PhD in Medical
  • 02:15Informatics and Complex Systems
  • 02:18Science from Indiana University,
  • 02:21and he's been here working with us on many of these issues.
  • 02:26Thilan Wijesekera is an assistant professor on the Clinician Educator Scholar Track.
  • 02:32I have to brag a little.
  • 02:34He completed his Master's in Health Science in medical education.
  • 02:39He was part of our first cohort.
  • 02:42He did his undergrad at Duke,
  • 02:44and then he went to University of
  • 02:47Rochester for his MD.
  • 02:48He then came to Yale for his internship and residency, and did a fellowship in general internal medicine under Donna Windish.
  • 02:58And I see that she's with us today.
  • 03:02He's had a number of leadership positions.
  • 03:06He's been the Course Director of
  • 03:08Clinical Reasoning since 2017 and he's
  • 03:11also an associate in our Center for
  • 03:15Medical Education providing educator
  • 03:18development on clinical reasoning.
  • 03:21He's also an advisor to the clinician educator distinction pathway for residents in the Department of Medicine,
  • 03:31and he is the Director of Performance
  • 03:34Improvement for our medical students.
  • 03:36And they've all put in a lot of time, and I'm really looking forward to hearing this presentation.
  • 03:42We have so much to learn.
  • 03:43So thank you very much for
  • 03:45putting this together.
  • 03:46I'll pass it over to you, Thilan.
  • 03:50And I will kick it off to Conrad and Anne Elizabeth.
  • 03:56Hello, everybody.
  • 03:57Thanks so much for coming.
  • 03:59I'm sure many, if not all, of you have heard of ChatGPT,
  • 04:03probably with a range of excitement to
  • 04:06skepticism and maybe some apprehension.
  • 04:08Today, we'd like to share with you our perspective as medical students on how ChatGPT could, and we argue should, be used to augment medical education.
  • 04:19This is the accreditation
  • 04:21and disclosure slide.
  • 04:22I think there's information in the chat,
  • 04:24but make sure you don't forget that.
  • 04:27Overall, in this presentation we'll cover some context and recent
  • 04:30research on large language models,
  • 04:32followed by an exploration of its
  • 04:34integration into medical education.
  • 04:36And the second half will focus on some
  • 04:38ways that faculty may use these models
  • 04:40to augment their teaching practices.
  • 04:44So this is a very basic neural network.
  • 04:47The idea is that there's an
  • 04:49input layer and an output layer,
  • 04:51and then a hidden layer in between that does the computation.
  • 04:57The idea is that you can strengthen or
  • 04:59weaken the connections between particular
  • 05:01nodes based on prior experience,
  • 05:03and this would be represented by
  • 05:05changing the thickness of the arrows.
  • 05:07And what we're doing then is altering
  • 05:10the probability of a particular
  • 05:12output being generated given an input.
  • 05:15A deep neural network is a much more
  • 05:18complicated version of this with
  • 05:20more layers and nonlinear layers,
  • 05:22and the computation of this goes way
  • 05:24beyond the scope of our presentation.
  • 05:26But it's important to understand
  • 05:28the basics because ChatGPT and
  • 05:30large language models broadly are
  • 05:33a type of deep neural network.
  • 05:36So we're creating a probability
  • 05:38distribution to understand not
  • 05:40only the definitions of words,
  • 05:42but also their usage in relation
  • 05:45to other words,
  • 05:46and this allows for the
  • 05:48abstraction of information.
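To make that concrete, here is a minimal sketch in Python (added for illustration; the layer sizes, random weights, and the tanh/softmax choices are our assumptions, not anything shown in the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(3, 4))  # input (3 nodes) -> hidden (4 nodes); weights are the "arrow thickness"
W_output = rng.normal(size=(4, 2))  # hidden (4 nodes) -> output (2 nodes)

def forward(x):
    hidden = np.tanh(x @ W_hidden)       # the hidden layer does the computation
    logits = hidden @ W_output
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()               # probability of each output given this input

print(forward(np.array([1.0, 0.5, -0.2])))  # e.g. [0.7, 0.3]
```

Training would nudge W_hidden and W_output, the "arrow thicknesses," so that the desired outputs become more probable for a given input.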
  • 05:52Oh, just to clarify, the GPT in ChatGPT stands for Generative Pre-trained Transformer.
  • 06:00So ChatGPT by no means exists in isolation.
  • 06:03I think it sparked a lot of people's
  • 06:05interest in large language models,
  • 06:07but it wasn't the first and
  • 06:08it's definitely not the last.
  • 06:09There are a ton of these new
  • 06:11models coming out right now,
  • 06:12and they're rapidly expanding
  • 06:14in terms of the size of the models and their performance.
  • 06:17This graph is already out of date,
  • 06:21so it's important to note
  • 06:23ChatGPT, as I was explaining, was trained to produce plausible text using probability; it
  • 06:30is not actually trained on what
  • 06:32is true versus what is not true,
  • 06:35and that means that it is capable of
  • 06:38generating text that is fundamentally wrong.
  • 06:41These have been termed hallucinations,
  • 06:44and the reason that this is possible
  • 06:46is because we're asking it to do next
  • 06:49word prediction to give us a plausible
  • 06:51string of text to answer a question,
  • 06:54but it's not actually doing the cognitive
  • 06:57task of answering that question.
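A toy illustration of that point (the probability table below is invented, standing in for what a trained model learns from data):

```python
import random

# Invented next-word distribution: the model only knows what is *plausible*.
next_word_probs = {
    ("the", "patient", "has"): {"pneumonia": 0.5, "anemia": 0.3, "lupus": 0.2},
}

def next_word(context):
    dist = next_word_probs[context]
    words = list(dist)
    return random.choices(words, weights=[dist[w] for w in words])[0]

print("the patient has", next_word(("the", "patient", "has")))
# Any of the three can be sampled; nothing in the mechanism checks whether
# the completed sentence is actually true for this patient.
```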
  • 06:59Additionally, ChatGPT does not have
  • 07:01access to data behind paywalls,
  • 07:04so it was trained on a huge volume
  • 07:06of information on the Internet,
  • 07:08but only that was freely available.
  • 07:10The free version of ChatGPT, GPT-3.5, was not trained on data past 2021, and GPT-4 was not trained on data past April 2023, I believe.
  • 07:24So it's not browsing the Internet
  • 07:26in real time or in our case reading
  • 07:29scientific or medical literature that is
  • 07:32coming out as it immediately comes out.
  • 07:36So for a little bit more context, there was a recent research study, actually conducted at Yale, looking at how ChatGPT (this was the original base 3.5 model) could perform on the USMLE exam, and to sum it all up in one phrase: it passed Step 1.
  • 07:56sparked a lot of excitement and I think the
  • 07:59medical community and the more recent update.
  • 08:01So that was equivalent to you could argue
  • 08:03about an M3 able to pass a step one exam.
  • 08:06But these more recent GPT 4 which
  • 08:09is open AIS latest model has been
  • 08:12shown to achieve like greater than
  • 08:14an 85% on all three-step exams.
  • 08:17So this is performing at a pretty
  • 08:20high percentile on these tasks.
  • 08:26So to be clear, in this presentation
  • 08:28we are not proposing using ChatGPT
  • 08:31as the definitive source of truth,
  • 08:34but rather as a tool.
  • 08:35So think of it like a calculator for doctors, or like Wikipedia;
  • 08:40you still need to understand the
  • 08:43underlying computation of what
  • 08:45you're entering into the calculator,
  • 08:47but you don't necessarily
  • 08:48need to do out all the math.
  • 08:50So we equate using ChatGPT to having
  • 08:52a conversation with a friend.
  • 08:56So then this next part of the presentation,
  • 08:58we're going to highlight a few use cases
  • 09:01that have emerged over the past year of
  • 09:04our using ChatGPT often on a daily basis.
  • 09:07So this first use case is when
  • 09:09reviewing multiple choice exams.
  • 09:11So this is an example of a multiple
  • 09:14choice question that we had on a
  • 09:16midterm and I missed the question,
  • 09:18I wasn't totally sure why and I couldn't
  • 09:20really explain the correct answer.
  • 09:22And unfortunately faculty have so much on their plate, and sometimes you want
  • 09:28more of a rationale than is provided.
  • 09:29So, plugging that whole question into ChatGPT, we were able to explore its response to try to better understand the question.
  • 09:39And you also get this nice verification: if ChatGPT chooses the correct answer, that's at least some protection letting you know its response isn't a hallucination.
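For anyone who prefers scripting this workflow over the chat interface, here is a minimal sketch with the OpenAI Python client; the model name is illustrative, the question text is a placeholder rather than the midterm question discussed, and an API key is assumed to be set in the environment:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = """<paste the full multiple-choice question and its answer options here>

Which answer is correct, and why is each of the others wrong?"""

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; use whatever model you have access to
    messages=[{"role": "user", "content": question}],
)
print(response.choices[0].message.content)
```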
  • 09:51Another use case that Conrad and I
  • 09:54have been discussing is practice cases.
  • 09:56So we have workshops in preclinical where we go through practice cases with a faculty member,
  • 10:02but there's so many more important
  • 10:04topics that we simply don't have
  • 10:06time to cover in those spaces.
  • 10:08And so we can go into ChatGPT
  • 10:12and ask for workshop cases that we
  • 10:14can then work through on our own.
  • 10:16The advantage of doing this with
  • 10:18ChatGPT over using a clinical cases
  • 10:21textbook or what's available on like
  • 10:24the New England Journal website is
  • 10:25that we can actually interact with it.
  • 10:27So we can ask it to rephrase
  • 10:29things that don't make sense to us.
  • 10:31We can ask for additional details.
  • 10:34But perhaps most importantly,
  • 10:36the cases that it's generating
  • 10:37are not static.
  • 10:38So we could ask it to change one finding.
  • 10:41So for example,
  • 10:42change a physical exam finding or a
  • 10:44lab finding and then redo the case.
  • 10:46And that really helps us hone our
  • 10:50clinical reasoning skills to see
  • 10:52how one element can change the
  • 10:54whole differential or not.
  • 10:56So it's allowing us to practice
  • 10:58applying the pathophysiological
  • 10:59frameworks that we are learning.
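The "change one finding and redo the case" loop described above maps naturally onto a running conversation with the model. A sketch under the same assumptions (OpenAI Python client; the case topic and prompts are invented):

```python
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user",
             "content": "Write a short preclinical workshop case on dyspnea, "
                        "with vitals, a physical exam, and basic labs."}]

def ask(history):
    # Send the whole conversation so the model keeps the case in context.
    reply = client.chat.completions.create(model="gpt-4", messages=history)
    content = reply.choices[0].message.content
    history.append({"role": "assistant", "content": content})
    return content

case = ask(messages)
messages.append({"role": "user",
                 "content": "Now change only the BNP from elevated to normal and "
                            "redo the case. How does the differential shift?"})
revised = ask(messages)
print(revised)
```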
  • 11:03We're saving a third use case
  • 11:04for the end of our presentation,
  • 11:06so we'll come back to it.
  • 11:07But this is a more recent update from OpenAI: the development of these personalized GPT models.
  • 11:14And with these models, you can custom build a version of ChatGPT by providing relevant contextual information.
  • 11:20For example,
  • 11:21I'm a second year medical student
  • 11:22and I'm interested in learning
  • 11:24content at the level of depth needed for my upcoming USMLE Step exam.
  • 11:28Moreover, for my personal learning style,
  • 11:31I like information presented concisely,
  • 11:34using familiar medical abbreviations
  • 11:35and bulleted lists when applicable.
  • 11:39So Conrad is significantly better
  • 11:41at memorizing patterns than I am,
  • 11:43and I really need a framework
  • 11:45for thinking that is Physiology
  • 11:47driven so that I can work my way
  • 11:50up from the basic principles.
  • 11:52So when I was creating my
  • 11:55Med school GPT tutor,
  • 11:56I specifically told it that I wanted
  • 11:59explanations to be longer centered
  • 12:01on pathophysiological frameworks and
  • 12:03building from the underlying mechanisms
  • 12:06up through the presentation of disease.
  • 12:09So this is really great because Conrad and I are using the same tool, ChatGPT-4, but we are able to tailor it based on what we specifically need for our own individual learning styles.
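Custom GPTs are configured through the ChatGPT interface rather than through code, but a system message is the equivalent mechanism in the API. A sketch of the two tutor styles just described, with the instruction text paraphrased from the talk:

```python
from openai import OpenAI

client = OpenAI()

concise_tutor = ("I'm a second-year medical student studying for Step 1. "
                 "Answer concisely, using familiar medical abbreviations "
                 "and bulleted lists when applicable.")
mechanism_tutor = ("I'm a second-year medical student. Give longer explanations "
                   "centered on pathophysiological frameworks, building from "
                   "underlying mechanisms up through the presentation of disease.")

def tutor(style, question):
    # The system message steers the same underlying model toward one style.
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": style},
                  {"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

print(tutor(mechanism_tutor, "Explain the pathophysiology of nephrotic syndrome."))
```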
  • 12:23So here are a few other things
  • 12:25that Conrad and I have come up
  • 12:26with and that are suggested in the
  • 12:28literature as to ways that ChatGPT can be used in medical education:
  • 12:33so it could be used to reduce
  • 12:35the overwhelming landscape of
  • 12:37available resources for students.
  • 12:38It could help create a study plan,
  • 12:41and it could also be a simulated patient.
  • 12:43And we could practice doing history taking, typing it out.
  • 12:47So there's lots of potential.
  • 12:50Taking a step back from the day-to-day trenches of medical school: whether we like it or not, large language models are coming to healthcare.
  • 12:58Here is a smattering of a few of the recent papers coming out about potential uses of ChatGPT in medicine, and since making these slides there have been a lot more.
  • 13:07Moreover, Microsoft and Epic announced an expanded partnership to bring ChatGPT into the electronic health record at Yale.
  • 13:15Whether we like it or not, we need to be prepared here.
  • 13:19I'll also highlight that as part of this expanded partnership, there's actually a pilot program happening at YNHH.
  • 13:28Yale is one of 10 sites selected for this pilot program for MyChart inbox messaging from physicians.
  • 13:40In the specific example, this patient asked about the poor air quality, whether it is or should be impacting her exercise routine, and whether she can keep going on walks.
  • 13:53And ChatGPT, GPT-4 in this case, drafts this auto-reply that the physician absolutely needs to read, review, and likely edit before sending, and of course you have the option to reject that draft.
  • 14:05But this is coming to Epic very soon.
  • 14:08So I think that's part of the impetus for
  • 14:10us really learning about these models.
  • 14:18So there are many proposed ways in the
  • 14:21literature at large about how ChatGPT
  • 14:23can be used in medical practice.
  • 14:26So Conrad and I have talked about in
  • 14:29particular that ChatGPT can be applied to
  • 14:31reduce the administrative load of physicians.
  • 14:33So for example, it could be
  • 14:36used for insurance filings or
  • 14:38generating discharge instructions.
  • 14:39It can also be used for reducing errors.
  • 14:42So for example,
  • 14:43detecting potential drug-drug interactions, and it might actually be
  • 14:47able to reduce the burden on patients.
  • 14:49So for example, there may be a way to do a follow-up appointment with the chatbot without actually scheduling a whole visit for something that's very routine.
  • 15:00But ultimately,
  • 15:01the goal is to use automation
  • 15:04to reduce physician burnout
  • 15:05and to improve patient care.
  • 15:10So as Conrad said, we really need
  • 15:12to be prepared for this inevitable
  • 15:14integration of AI into healthcare.
  • 15:17And I think that part of that is
  • 15:19evaluating very critically how we can
  • 15:22use large language models responsibly,
  • 15:24but also what the limits are of
  • 15:28large language models and AI at large.
  • 15:30So we think questions about hallucinations, bias, ChatGPT's evaluation of its own uncertainty, and legal and ethical issues would be great to address in spaces like Professional Responsibility,
  • 15:43which is a preclinical course that
  • 15:45goes through some of the major ethical
  • 15:47problems that physicians face.
  • 15:50And we'll add, they did indeed have a session on this for the first year students, which we were really excited to see, and we'd love to see more of those kinds of discussions happening.
  • 16:01So for this part, we're going to show a short live demo of a third use case.
  • 16:07This is something that we use when reviewing our clinical cases so far in this first year of the didactic curriculum, but also something that we might use on the wards after seeing a patient:
  • 16:15brainstorming our initial differential, but then wanting to check back and see if there are other diagnoses that we're potentially missing, things that we should be considering here.
  • 16:30So hopefully this screen share works, and here I'll click submit, and ChatGPT can help brainstorm this list.
  • 16:43It gives us an initial list linking the specific symptoms in the presentation to different potential diagnoses on that list.
  • 16:51But I think the most useful thing here is the
  • 16:53additional follow up questions you can ask.
  • 16:55So I won't go through each one individually,
  • 16:58but things ranging from which physical exam maneuvers we might want to consider to further differentiate between these diagnoses, to what laboratory tests we should order and what results we might expect for each of these diagnoses, to what further workup, what treatments, etcetera.
  • 17:15And you know,
  • 17:17in addition to asking across
  • 17:18the board for all of them,
  • 17:20you can hone in and say
  • 17:22for PCOS specifically,
  • 17:23what do I need to be thinking about?
  • 17:25So that interactive component
  • 17:27is really helpful for us.
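A sketch of the demo's prompt pattern, with an invented presentation standing in for the case shown on screen:

```python
from openai import OpenAI

client = OpenAI()
case = ("28-year-old woman with irregular menses, weight gain, acne, and "
        "hirsutism. Brainstorm a differential diagnosis, linking each "
        "symptom in the presentation to each diagnosis on the list.")
messages = [{"role": "user", "content": case}]

first = client.chat.completions.create(model="gpt-4", messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# A targeted follow-up of the kind described in the demo:
messages.append({"role": "user",
                 "content": "For PCOS specifically, which labs should I order "
                            "and what results would I expect?"})
followup = client.chat.completions.create(model="gpt-4", messages=messages)
print(followup.choices[0].message.content)
```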
  • 17:30Yeah, we joke that you're arguing
  • 17:32with ChatGPT and that's actually
  • 17:33the most useful part of it.
  • 17:36And the last note is the same thing: if we're rotating together and we had a few minutes before talking to the attending, we'd definitely want to go back and forth talking about our differentials and seeing if there are things that Anne Elizabeth thought of that maybe I missed.
  • 17:50So we kind of think of it as
  • 17:51a conversation like that.
  • 17:54So we know that AI and medicine
  • 17:56sounds really scary and uncertain,
  • 17:59and we have our own
  • 18:00hesitations about it as well.
  • 18:01But the reality really is that we
  • 18:04don't actually have a choice because
  • 18:06this is coming whether we like it or not.
  • 18:09Conrad and I are starting in the hospital in January on our clerkships, and as we mentioned, ChatGPT is being integrated into Epic at Yale New Haven.
  • 18:22And so that means literally from the
  • 18:24beginning of our time in the clinical
  • 18:26spaces ChatGPT will be present.
  • 18:29So Yale School of Medicine's mission
  • 18:31statement says that the school is striving
  • 18:34to create leaders in medicine and science.
  • 18:37And we really believe that in
  • 18:39order to be equipped to be future
  • 18:41leaders in medicine and science,
  • 18:43we must be at the forefront of AI in medicine from the beginning of this movement, which really is quite inevitable.
  • 18:52So we built in a little
  • 18:55pause midway through this.
  • 18:56We have about 3 to 5 minutes
  • 18:58for one or two questions,
  • 19:00if people have them.
  • 19:01I haven't been watching the chat,
  • 19:03so maybe we want to go there.
  • 19:04I'll let you decide how we open things up.
  • 19:08No, let's open it up to the group if they have any questions.
  • 19:12If you want to unmute yourself
  • 19:13and share your questions,
  • 19:15we'd love to hear them. I
  • 19:18had a question about accuracy. I want you to review again: how is the accuracy of the information that you are getting?
  • 19:30Like if you use it as a mentor
  • 19:31in the differential before
  • 19:33you present to the attending.
  • 19:35I mean, we're so used to everything being peer reviewed, and to UpToDate or the other learning modules being written by another human being.
  • 19:43So who is reviewing the ChatGPT output, especially if the Internet it's looking at is, you know, over a year old, which is what you said
  • 19:53in the beginning. So
  • 19:54how is the accuracy being monitored?
  • 20:00So I think that's a really good
  • 20:02question and something we're always
  • 20:04thinking about as we're using it.
  • 20:06It was trained on the Internet
  • 20:07and that's a scary thought.
  • 20:08We all know we've all read things
  • 20:10that are not true on the Internet.
  • 20:12So that's definitely something
  • 20:14we need to acknowledge and understand when we're interpreting the responses from ChatGPT.
  • 20:20Never use ChatGPT as a replacement for UpToDate, for example.
  • 20:25That being said, it is performing.
  • 20:29I would say most of our classmates
  • 20:31right now are not able to hit
  • 20:3385% across all the STEP exams.
  • 20:36I think that's one indicator of the
  • 20:39pretty high accuracy that things
  • 20:41are able to achieve right now.
  • 20:43So it's imperfect, and it's really important
  • 20:46that people understand that it's imperfect.
  • 20:49But I do think that it can be a really helpful classmate.
  • 20:53I would even argue more helpful than myself.
  • 20:56I know I'm not achieving those scores
  • 20:58on any of the STEP exams right now.
  • 21:00So I would recommend shooting questions to ChatGPT, maybe before asking a classmate like me.
  • 21:07And that's also why we recommend arguing with it, and why we think it's important that we start using it in our classrooms:
  • 21:18to learn how you can tell if it's hallucinating or not, and how we can implement our own best practices and safeguards to ensure that we are checking it appropriately.
  • 21:31And I think that there isn't necessarily
  • 21:33a best practice established at this point.
  • 21:36And I think that's where Yale
  • 21:38School of Medicine comes in.
  • 21:39That's an opportunity for
  • 21:41us to really lead the way,
  • 21:44right. So we have time maybe for
  • 21:46one more question live, but because
  • 21:48there are so many of us presenting,
  • 21:50we'll have everybody start replying
  • 21:51in the chat as well once we're done.
  • 21:53So Rob, I think you had the first question that I saw, if you wouldn't mind sharing.
  • 21:58I think Dana actually went before me, but I just wanted to
  • 22:01reply to Peggy, which is to point
  • 22:03out that the idea that students
  • 22:06are using definitive textbooks
  • 22:08only for all of their lookups is not really what's happening, right? So people
  • 22:13look up whatever they find,
  • 22:14and the question is not, is it perfect,
  • 22:16but is it better or at least
  • 22:17as good as the alternatives?
  • 22:19I mean, you know, I, as, you know,
  • 22:21I teach a course and I sit
  • 22:22through courses and I see people
  • 22:24say things that aren't true.
  • 22:26And so, you know, even at Yale,
  • 22:28even with the fancy Yale professors,
  • 22:30people say something that's wrong.
  • 22:32I see things in textbooks that are wrong.
  • 22:34So it's like the idea that somebody on the Internet might be wrong and we have to correct them.
  • 22:40I think that's really not correct. I think the real question is:
  • 22:46Is it better than the alternative?
  • 22:47That's really the issue.
  • 22:51Dana, I think we can sneak you in.
  • 22:52What's your question?
  • 22:55I was just going to add something
  • 22:57that's probably not a novel thought,
  • 22:59but that the input is only going to be
  • 23:02as good as the clinical skills will allow
  • 23:05insofar as did you ask the right question,
  • 23:08you know, what's missing from the stem?
  • 23:10Did you do the right physical exam?
  • 23:13And it's just to say, sometimes when new technology comes along, like radiology, like CAT scans, new learners start to say the answer is in the CAT scan, and that's more sensitive and specific than anything else.
  • 23:24So it's just an opportunity to remind
  • 23:26people to use it as you are both saying,
  • 23:28as a tool and that it can't be a
  • 23:31substitute for clinical skills,
  • 23:35right. We are gonna have
  • 23:37some good discussion today.
  • 23:38Conrad, do you have any last
  • 23:39thoughts before I move on?
  • 23:41One last insertion is just that I also think there is value in practicing differentials without ChatGPT.
  • 23:48I'm not saying that it should replace that independent reasoning.
  • 23:52And I think the parallel analogy is that,
  • 23:54like, students need to learn
  • 23:55their multiplication tables
  • 23:56to be able to do algebra.
  • 23:58And if they just use a calculator and never
  • 24:00learn how to do multiplication tables,
  • 24:02that's going to cause a lot of
  • 24:03problems for them down the line.
  • 24:04So I think it's really important
  • 24:06to develop these skills, like,
  • 24:07independently without these tools as well.
  • 24:10I just think, you know, no one would say no one should use calculators ever.
  • 24:14It's just we have to be thoughtful
  • 24:15about when we use them and how
  • 24:17they're augmenting our education.
  • 24:20Awesome. Thank you so much.
  • 24:21We're definitely going to have
  • 24:22time at the end where we can
  • 24:24field some of these questions.
  • 24:25But Conrad, Anne Elizabeth, and David,
  • 24:27if you want to start answering some
  • 24:28of these other questions in the chat,
  • 24:30that'd be great.
  • 24:31So we've heard about some questions
  • 24:34on student use of ChatGPT.
  • 24:36But how can faculty use ChatGPT
  • 24:39to augment medical education?
  • 24:41And I say this as someone who doesn't
  • 24:43consider themselves particularly tech savvy.
  • 24:45I run a couple courses.
  • 24:47But to be honest, I'm a millennial who has significant fear of missing out more than anything else.
  • 24:53And so I follow my journals,
  • 24:55we all get our updates right
  • 24:57in our e-mail.
  • 24:58And I'm seeing in Academic Medicine, JGME, Medical Teacher,
  • 25:03more articles about how we can use this,
  • 25:05even published medical education
  • 25:07articles in JAMA and the New
  • 25:09England Journal of Medicine,
  • 25:10which, as all of y'all scholars know, is very challenging to do.
  • 25:14So clearly it is becoming
  • 25:17mainstream not just in healthcare but specifically in medical education.
  • 25:22So with that in mind, as we're starting to get a feeling about this, I wanted to see just that: how are you as faculty and learners feeling about ChatGPT?
  • 25:32So we're putting up a poll there.
  • 25:34You can put in as many choices as you want, and I know there are multiple choices in there too.
  • 25:39I'll answer this after as well,
  • 25:40but I'll say at least for me,
  • 25:42for the past three to six months,
  • 25:45I've been talking to educators at Yale,
  • 25:48basic science,
  • 25:49clinical science,
  • 25:49and leaders across the country
  • 25:52around ChatGPT.
  • 25:53And these are some of the adjectives
  • 25:55that I'm seeing and I know they've
  • 25:57all crossed my mind to some degree.
  • 26:00So I want to think about why
  • 26:01we're having these concerns,
  • 26:02some that have already been named.
  • 26:03So the first one is the quality
  • 26:05of the information.
  • 26:06This is by far the biggest concern
  • 26:08that's come up with educators.
  • 26:10And I would certainly agree with Rob that
  • 26:12while it's something to be concerned about,
  • 26:14I think we have to remember like teaching
  • 26:16comes from different sources all the time.
  • 26:19We ask our learners to often study
  • 26:21in groups together, and where's that information coming from?
  • 26:24I think another barrier that comes up
  • 26:27is familiarity or comfort with use,
  • 26:29where we hear things like large
  • 26:31language models, predictive text,
  • 26:33chats,
  • 26:33prompts,
  • 26:34and it feels almost like we're
  • 26:37coding and that's not what ChatGPT is.
  • 26:39But I can understand where that
  • 26:41concern comes from.
  • 26:42And I think a very real one is cost.
  • 26:44There's a literal cost of a monthly
  • 26:46fee if you wanted to get ChatGPT-4.
  • 26:48But I think that more than that,
  • 26:49it's the time investment kind of
  • 26:51to the second point of how long
  • 26:53is it going to take to learn.
  • 26:55I think we're getting a lot of similar concerns to when the EHR first came up in my training.
  • 27:03So with that in mind,
  • 27:05I wanted to also ask about how
  • 27:08often you're using ChatGPT.
  • 27:10But before that, David,
  • 27:12I was wondering if you could show us our feelings on ChatGPT right now. So, good.
  • 27:18We have a lot of people who are curious.
  • 27:20That's probably why you're here.
  • 27:21That makes sense.
  • 27:22But I'm kind of surprised; we have a lot more optimism in the room.
  • 27:27So we can build on that.
  • 27:28That's right.
  • 27:29We're already winning here,
  • 27:31Conrad and Anne Elizabeth and David.
  • 27:32So next poll that's coming up
  • 27:35is how often do you use ChatGPT.
  • 27:38So David, if you wouldn't mind closing this poll and opening up the next one about frequency.
  • 27:44So, have you never ever used it? Are you using it daily? Or have you tried it once and that was more than enough for you?
  • 27:54And as we think about how
  • 27:55often people are using it,
  • 27:56I think it's important to think about
  • 27:58change in the greater perspective
  • 27:59of innovation and technology.
  • 28:01So Everett Rogers had some work in
  • 28:04the 1960s around the diffusion of innovation,
  • 28:07also known as the technology adoption cycle.
  • 28:10For those of you who are Malcolm
  • 28:12Gladwell enthusiasts and have
  • 28:13read The Tipping Point,
  • 28:15generally we think around getting to
  • 28:17this 16 to 18% of any population.
  • 28:20And after that point,
  • 28:21you're starting to move more and
  • 28:24more towards something that's
  • 28:25just going to happen,
  • 28:26something that's inevitable.
  • 28:27And so I wanted to get David's
  • 28:30take as probably the one with
  • 28:32the most expertise on ChatGPT.
  • 28:34David, how do you feel?
  • 28:35Where do we feel like we are on
  • 28:38ChatGPT in general academic use?
  • 28:40And then specific to medical education,
  • 28:42how many people are using it.
  • 28:45So
  • 28:45I think this is partially
  • 28:48a question of how we see it.
  • 28:50You know, there's a lot of people
  • 28:51who are certainly talking about it.
  • 28:53There's a lot of conversations,
  • 28:55there's a lot of hype,
  • 28:57but certainly students are using it more frequently
  • 29:01than I ever expected them to be.
  • 29:03But it hasn't quite hit that
  • 29:05public consciousness moment.
  • 29:07So I would estimate, you know,
  • 29:08we're somewhere across the
  • 29:10chasm in the early majority and
  • 29:14specifically in medical education,
  • 29:16I think there's been a lot of hesitancy. There have been a lot of questions about,
  • 29:21you know,
  • 29:21rightly, as we've even seen today, about validity and the larger problems of,
  • 29:26you know,
  • 29:27accuracy of medical information
  • 29:29amidst the Internet.
  • 29:30But there have been some focused efforts to use it well.
  • 29:34And this is really what we're talking about, right? Using it well.
  • 29:37And that's going to be the tipping point.
  • 29:40Once that has hit mainstream,
  • 29:43once we're all talking about it
  • 29:44or using it more commonly,
  • 29:46that's when we'll tip to the late
  • 29:48majority from the early majority.
  • 29:51And I think we're really just breaching the chasm right now.
  • 29:54All right. So David, where are we in the chasm, at least as far as our audience right now?
  • 30:00So how's that poll looking?
  • 30:03I love harkening back to a time of Zoom polls. I haven't done one of these in a minute, I would say.
  • 30:08So we have a lot of never users, maybe even some daily users, I'd say.
  • 30:12I probably personally am a weekly user of it,
  • 30:16probably every time I write up or start
  • 30:19doing some sort of curriculum development.
  • 30:22But I will say, honestly, my gestalt: if you use it within the next three to six months, to some degree, at least a few times, you would probably be in the early adopters.
  • 30:36That's my read of the room from medical schools across the country.
  • 30:40So how should educators use ChatGPT now?
  • 30:43This review was published in the
  • 30:46spring of 2023 in Academic Medicine
  • 30:49on how artificial intelligence and
  • 30:51large language models like ChatGPT
  • 30:53should be used in medical education.
  • 30:56They came up with some competencies,
  • 30:58knowledge of AI, ethical implications.
  • 31:00Think about the bias, for example,
  • 31:01of data sets.
  • 31:03It can actually propagate bias
  • 31:04if not reviewed correctly,
  • 31:06how it's affecting clinical encounters
  • 31:09and how we're working to improve it.
  • 31:11But I think it's important
  • 31:12to remember a couple things.
  • 31:14One is that this isn't what every
  • 31:17single medical educator needs to do.
  • 31:19These are institutional values
  • 31:21that we're trying to promote.
  • 31:24Two, with that in mind,
  • 31:26we have our whole informatics
  • 31:28department here at Yale,
  • 31:30for example,
  • 31:30that David is a part of who are
  • 31:33helping and are going to develop
  • 31:35more and more curricula and
  • 31:36workshops for students and faculty
  • 31:38alike to develop those skills.
  • 31:40So I think honestly it might be
  • 31:43better just to play around with
  • 31:44it ourselves and see how it might
  • 31:46be beneficial to us as educators.
  • 31:48So I'll tell you my story.
  • 31:49It's nothing particularly special,
  • 31:51but when I first started using
  • 31:53ChatGPT was over the summer.
  • 31:55I direct our academic support
  • 31:57program and I had to develop some
  • 32:00cases for SPs, standardized patients, for the students we work with.
  • 32:03And I saw Barbara Hildebrandt on the call.
  • 32:05She gave me a few reminders and it
  • 32:08wasn't until the day before that
  • 32:09I started writing these cases and
  • 32:11I blocked out a few hours and I
  • 32:13was like alright,
  • 32:14I can write them myself or I could
  • 32:16see what ChatGPT can do for me.
  • 32:18So, not feeling great initially,
  • 32:20but I saw what ChatGPT can do.
  • 32:23I put in a general prompt, below this: don't worry, medical school, medicine curricula, clinical skills program.
  • 32:31I won't show the actual case, but I put in one of our representative cases, used this new prompt and chief concern, and this is what it spit out to me.
  • 32:43Now, this is a script for a standardized patient,
  • 32:47and it's written very,
  • 32:49very similarly, but on a new case.
  • 32:52So this is talking about knee pain.
  • 32:54It gives us an HPI,
  • 32:55but also things like emotional
  • 32:57context as well as social history.
  • 32:59This is more than enough for
  • 33:01an SP to use to play a role.
  • 33:03It even gives us a little bit
  • 33:05of a differential diagnosis.
  • 33:06If you wanted faculty to have a little bit
  • 33:08of teaching points that they could give.
  • 33:10Obviously they could add their own expertise,
  • 33:12but I saw that and at
  • 33:13least for case development,
  • 33:14as someone who's involved in clinical skills,
  • 33:16I was like,
  • 33:17this is pretty much good enough for me to pay 20 bucks a month now for ChatGPT-4.
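The actual prompt and source case were not shown, so the following is only a hedged reconstruction of the template pattern being described; {existing_case} is a placeholder for the representative case that was pasted in:

```python
# Hypothetical prompt template; the wording is ours, not the one used in the session.
sp_prompt_template = """You are helping a medical school clinical skills program
write standardized patient (SP) scripts. Here is one representative case in our
house format (HPI, emotional context, social history, brief differential with
teaching points):

{existing_case}

Using the same format and level of detail, write a new SP script for a patient
whose chief concern is knee pain."""
```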
  • 33:22Since then I've used it a few times creating facilitator guides.
  • 33:26Literally after this we'll be doing
  • 33:28a clinical reasoning session with
  • 33:30one-on-one consultations between
  • 33:32faculty and tutors, I think.
  • 33:34And Anne Elizabeth has a slot coming up soon too.
  • 33:37And we use ChatGPT for the facilitator guide.
  • 33:40I've used it to update curricula.
  • 33:42For example,
  • 33:43we have a session on Bayesian
  • 33:45reasoning in the physical exam.
  • 33:47I got feedback from learners on ways
  • 33:49to improve and we use that to create
  • 33:52new cases and learning objectives.
  • 33:54I've used it to prepare feedback on the wards; I had a couple of struggling learners and had to deliver feedback, and I put in the PEARLS format, the ask-tell-ask model.
  • 34:06And just so I didn't have to
  • 34:08perseverate on the hard discussion,
  • 34:10I kind of had an idea of what might
  • 34:12be some things that I could say.
  • 34:14We've also had, within our clinical reasoning course, one of our associate course directors, Darius, develop some exercises around ChatGPT prompts, and I think our students found it OK. Conrad and Anne Elizabeth were in that class; I think it was tolerable.
  • 34:30But I think more than anything, what I realized from that is that we don't have to teach and use ChatGPT all the time. Our students are going to get that information.
  • 34:39It's more important for them to learn somewhere along the line about the concerns and ways to use it, and then for us to use it ourselves.
  • 34:48And so with that in mind, when it comes to teaching, these are some general categories from a perspective piece that came out in Academic Medicine in August from Boscardin.
  • 35:00Some opportunities for teaching, like I mentioned around curriculum development, are there, but also evaluation.
  • 35:06We can look at our curricula and eventually, with large language models, if not already, say, hey, are we meeting our goals, our educational program objectives? We can put our content in and evaluate it.
  • 35:19One conversation I've had a lot with
  • 35:20David is around teaching frameworks.
  • 35:22So we teach knowledge in medical education,
  • 35:25but then eventually they have
  • 35:26to practice and there's this
  • 35:28huge space in between about how
  • 35:31students synthesize and connect
  • 35:32information. And that is a place
  • 35:35where ChatGPT could really help.
  • 35:36How do you interact with information?
  • 35:38How can you make things iterative
  • 35:39in ways that we can't do as faculty
  • 35:42because we simply aren't there
  • 35:43all the time with students now?
  • 35:45So the concerns are there,
  • 35:47the quality of the resources,
  • 35:48information for sure.
  • 35:49But I think a sneaky concern that comes up, and I think it's a good concern for us, is that ChatGPT can replace a lot of the stuff that we're already doing, and that might increase the need for more in-person faculty tasks.
  • 36:03A question like, where's the real value in medical education, is gonna come up more often than not.
  • 36:10There are certainly gonna be opportunities in assessment: creating items, which we'll talk about, rubrics,
  • 36:18providing feedback both in answer keys or, as Anne Elizabeth talked about, more of a personalized tutor mode, the way you might have it in office hours if it were only you and the faculty member going back and forth. So certainly there are opportunities there.
  • 36:32Concerns arise as well: for example, if you have open-ended questions with answers, they're going to have to be reviewed carefully for AI generation, given the ability to answer multiple choice questions clearly at a very high level.
  • 36:50In my opinion, it's probably functioning somewhere between the level of a sub-I and an intern in medical knowledge, at least ChatGPT-4,
  • 36:57so strengthening proctoring
  • 36:58is going to come up.
  • 36:59It hasn't done great with automating scoring,
  • 37:02at least in our early studies,
  • 37:03but I imagine it'll get better
  • 37:06as our new technology improves.
  • 37:08But I will say this,
  • 37:09it is already starting to be used
  • 37:12in programs of study for item
  • 37:14assessment or item development.
  • 37:16So this is from the University
  • 37:18of Colorado's medical school.
  • 37:19It's a paper that came out in Medical Teacher, I think maybe two months ago.
  • 37:23And for their five-week reproductive course they had a total of 290 questions that were included across quizzes, their qualifier, and their final exams, and they used ChatGPT to create some of those items, about 10% of them.
  • 37:38And they used this flow chart, which isn't as serious as it seems, and honestly not as detailed as it looks.
  • 37:45If you look at that paper, the prompt, this third purple box, was very limited. It was: give us a four-sentence question on the reproductive system.
  • 37:54So it didn't even specify, in the ways that I did, this is how I want it formatted, this is the specific topic within that. And yet they still used it.
  • 38:04And when all was said and done, they found that these questions were similar in difficulty, granted, after expert review, assessment team review.
  • 38:16And arguably more importantly, the discrimination ability was similar.
  • 38:22So between high performers
  • 38:24and lower performers,
  • 38:26the question performed similarly
  • 38:28and faculty found it easy to
  • 38:30use and saved a lot of time in
  • 38:32their question development process.
  • 38:34There are other ways that ChatGPT will
  • 38:36certainly be used in medical education,
  • 38:38probably beyond the scope of what we talked about, like in research authorship.
  • 38:42I know journals are still trying
  • 38:43to figure that out right now.
  • 38:45Also in admissions as well.
  • 38:48I think there is something to be said.
  • 38:49I know there might be an initial concern
  • 38:51that people are writing their personal
  • 38:53statements with artificial intelligence,
  • 38:54but I think there's a counter argument that it actually can be good for reducing disparities for a lot of students who don't have the resources that maybe some universities have as far as preparing their health profession students for application.
  • 39:08So there's certainly other ways,
  • 39:10but at least we focused here
  • 39:12on teaching and assessment.
  • 39:14So, some takeaways, at least from my use and my reading of the room when it comes to ChatGPT. First off:
  • 39:21Again, you're still in the early adopters,
  • 39:23just try it.
  • 39:25Remember that you don't need
  • 39:27to be a super user right away.
  • 39:30Also remember that what you're developing
  • 39:32should be really seen as first steps,
  • 39:36not final products,
  • 39:37and that way you can use your
  • 39:39time and expertise to edit and
  • 39:42get higher quality products.
  • 39:44Another thing, reinforcing what David and I talked about: it can be used for synthesis and frameworks, less so for primary information.
  • 39:51I don't think that students are
  • 39:53using it for primary information
  • 39:54and we asked them about it; that's not how they said they're using it, either.
  • 39:58In fact, that's where they've been telling us they've been skeptical too.
  • 40:01It will force us to find where the limited resources are, where students aren't able to use it.
  • 40:08So just keep your eyes open for updates,
  • 40:10but ask for help when available.
  • 40:13And in this sense,
  • 40:14a quick shout out to David,
  • 40:16whose e-mail will be at the end,
  • 40:18and other colleagues; keep your eyes open for opportunities to learn more.
  • 40:25So with that in mind,
  • 40:25I wanted to move on to just some examples
  • 40:28so you can kind of see what I'm doing.
  • 40:30I know I did some screenshots,
  • 40:31but in addition to that I wanted to
  • 40:33show you some examples of some of the
  • 40:36chats that I did and see if maybe you
  • 40:38can take a look at these and say you know,
  • 40:41how helpful would it be.
  • 40:43So in this example,
  • 40:44a basic science lecture on
  • 40:46water soluble vitamins,
  • 40:48your vitamin Bs, vitamin Cs.
  • 40:51If you wanted it to help create a lecture outline,
  • 40:54this might be an example of what it
  • 40:56would look like. So I give a prompt.
  • 40:59I'm a faculty member.
  • 41:01I have 60 minutes.
  • 41:02I'm mostly didactic,
  • 41:03little bit large group,
  • 41:05but also I was able to put in a
  • 41:08Yale School of Medicine educational
  • 41:10program objective so that could
  • 41:12help me make sure I'm hitting the
  • 41:15right points, and it spit out this.
  • 41:18So I'll wait for 5 to 10 seconds to let you skim it.
  • 41:22David, would you mind throwing the link
  • 41:24to this in the chat for our faculty to take a look at another time?
  • 41:30But at least for me, looking at this,
  • 41:32I would give this maybe like a 2 or 2 1/2.
  • 41:35I don't really do lectures anymore,
  • 41:37but if I had to make 60 slides on something,
  • 41:40it would be a helpful start to
  • 41:41figure out what I should probably
  • 41:43think about putting into it,
  • 41:44maybe with my own resources.
  • 41:47Now let's say I wanted to add some cases.
  • 41:50Let's write some cases, but let's not make them too long.
  • 41:53I've always been told I've got to make sure I keep the words on my slides limited. So let's only make it 250 words.
  • 41:59Let's see what a case might look like
  • 42:02on thiamine deficiency or B6 deficiency.
  • 42:05And the first paragraph is a case,
  • 42:08and for me, as a person
  • 42:09who writes cases,
  • 42:10who talks to colleagues around the country,
  • 42:13for example,
  • 42:14the Clinical Problem Solvers at UCSF.
  • 42:16They've told me that you don't
  • 42:18really need to have a perfect case.
  • 42:20You can tweak cases,
  • 42:21you can adjust them for what
  • 42:23you need to know.
  • 42:24This is more than enough initially,
  • 42:26and I can add and change some things to
  • 42:28make it more representative based off
  • 42:30of my own expertise with this topic.
  • 42:32So I'd give this maybe a three.
  • 42:36Then let's say I want to write a question. Now, one way that we can get faculty buy-in, in general, to our educational content is that it can relate to them and what their needs are.
  • 42:50And so one thing you could ask it is, hey, let's make this format USMLE Step 1 style, USMLE Step 2 style.
  • 42:57And you can at least say that
  • 42:59to your students as well.
  • 43:00And that might make them more interested
  • 43:02and increase their motivation for
  • 43:04doing some of those questions.
  • 43:05And even more than that,
  • 43:07you can see it can create tables for you.
  • 43:09Now, I certainly would adjust and edit this table, but it would give me a good start if I wanted to have an answer key of illness scripts for any of these given vitamins, on deficiency specifically, ranging from epidemiology to diagnostic testing.
  • 43:25So all of this was created in less than 5 minutes.
  • 43:29It might have taken a little bit longer if I had it truly search through all of the USMLE Step 1 parameters through their database, but you can see that it certainly is a start.
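As a rough template of the kind of request being described here (the wording is ours, not the exact prompt used in the session):

```python
# Hypothetical prompt; illustrates the question-plus-answer-key pattern above.
item_prompt = """From the thiamine deficiency case above, write one USMLE Step 1
style multiple-choice question: a clinical vignette stem, five answer options,
and the single best answer with a brief rationale.

Then build an answer-key table of illness scripts for B1, B6, and B12 deficiency,
with columns for epidemiology, pathophysiology, signs and symptoms, and
diagnostic testing."""
```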
  • 43:42And so as we're nearing the end,
  • 43:43I do want to move on to a large
  • 43:45room discussion 'cause I know
  • 43:47the chat has been pretty active.
  • 43:48So we'll skip our second case
  • 43:51though David put one in the
  • 43:53chat of a clinical example.
  • 43:55But for a large group discussion,
  • 43:58I'd love to hear from y'all, having seen both the students' and now the educators' perspective.
  • 44:03What are your impressions of
  • 44:04ChatGPT for creating content?
  • 44:06Would you use it in your own
  • 44:08educational practice now?
  • 44:09And what are some next steps
  • 44:11that you think Yale School of Medicine
  • 44:14should be looking at for ChatGPT?
  • 44:16So what do y'all think?
  • 44:26So Thilan, I love Gary's question. He says, curious how much of this talk was AI generated?
  • 44:33So I would say our cases obviously are, the ones we plugged into ChatGPT, but no, at least none of my slides; I had to do, you know, this one myself.
  • 44:42But I don't know, Conrad and Anne Elizabeth, did you use ChatGPT for any of your slides?
  • 44:48It didn't make any of the slides, but one place we did use it: I had a few conversations back and forth when brainstorming ideas on how ChatGPT might be used in medical education.
  • 44:58So that was one place where, I don't know if it added any ideas, but it was reassuring that we'd already thought of most of the ideas it came up with.
  • 45:07So that was one place we used it.
  • 45:09But good question.
  • 45:11And I will say, Conrad and Anne Elizabeth have given this talk in a couple of iterations, so this is not the first time, if it seems too smooth to be real. It's just their practice and expertise.
  • 45:25Awesome. Allison, you have a question.
  • 45:29Hi, Thilan, thank you so much, and to all of our presenters today. This is just really, really interesting and I appreciate the time it took to prepare this.
  • 45:38I do have a question, Thilan, being on the continuing professional development side and thinking of our faculty. I've used it once, I was a "once" on the poll, but there's so much opportunity.
  • 45:52But I think it's a vulnerability to say, I don't know how to do this and how to best prepare. So from the CME side, how can we provide those opportunities to teach our faculty, and to be vulnerable and to say you don't know, and still find a best practice for integrating this in, even if it's very slow and
  • 46:17will take some time. Yeah,
  • 46:19we're all figuring this out, by the way.
  • 46:21If you have questions for specific people, feel free to put that in the comment box with your question, or ask them initially.
  • 46:28I'll take this one.
  • 46:29I guess initially, Allison,
  • 46:30I'll say that we're all learning together
  • 46:35and I think it might be a little bit
  • 46:37bumpy as we create more content.
  • 46:39I've seen other institutions' workshops; they'll have these playgrounds
  • 46:43where they'll give an open space and
  • 46:45all faculty can kind of use whatever
  • 46:47tools that they're working on at
  • 46:49any given time and ask questions
  • 46:51to someone who's hosting it.
  • 46:53And some people will just like take
  • 46:55those links and go offline and use it.
  • 46:56And that's OK.
  • 46:58But I think just having low
  • 47:00requirements or expectations and
  • 47:02just having a lot of opportunities
  • 47:04is just the best way to do it.
  • 47:06David,
  • 47:06I know you've like thought about this a lot.
  • 47:09What do you think would be a
  • 47:10good approach for helping our
  • 47:12faculty learn about ChatGPT and
  • 47:13other like large language models?
  • 47:16So, first point: do you mind taking down the slides so we can see everybody in panorama?
  • 47:21The easy answer to that is everybody's talking about it, from the Poorvu Center, to us, to the larger groups within the Faculty of Arts and Sciences, and there have been a lot of communities built as a product of that.
  • 47:37I'll say the Section for Biomedical Informatics and Data Science should be your first stop, because we're here to help.
  • 47:45You know, myself as well as others have been using large language models since before ChatGPT ever existed.
  • 47:52So if you would like that kind of opinion or that kind of expertise to join you, please reach out. I am happy to help.
  • 48:00I believe the slide deck will be shared at the end; my phone number and e-mail are at the end of everything. Very happy to help.
  • 48:09While I'm talking, let me quickly also mention that within the larger strategic partnership between the health system and the School of Medicine, there are capacities to use LLMs with PHI, with, you know, proprietary resources within the School of Medicine, that are being explored.
  • 48:32You know,
  • 48:32like I can't even begin to name the whole
  • 48:34list of people involved in this process,
  • 48:37but through Biomedical
• 48:39Informatics and Data Science,
• 48:40through the YNHH
  • 48:42Medical Information Officers and
  • 48:45Digital Transformation Solutions folks,
  • 48:47it is happening.
  • 48:48If you want to be involved,
  • 48:50if you want to be connected,
  • 48:51reach out and I will do my best
  • 48:53to get you to the right person.
  • 48:56So for questions, maybe
  • 48:58we can go to Dana next.
  • 49:03Thanks for the presentation.
  • 49:05With respect to challenges and applications,
  • 49:07a quick thing that came to mind
• 49:09as a recent clerkship director
• 49:11trying to embrace more holistic
  • 49:13inclusion of social determinants of
  • 49:15health into our clinical teaching.
  • 49:17And I know Sheila Gupta,
  • 49:18who's working on our masters right now,
  • 49:20is working on this whole project of
  • 49:22how to develop faculty to include
  • 49:23this kind of thing in their content.
• 49:25And it seems so massive to train
• 49:27everyone and to find all
• 49:29the articles to put in.
• 49:30So it just seems like a really good
• 49:33potential application: if you can train
• 49:35it on this individual content,
• 49:38can you incorporate
• 49:40social determinants of health?
• 49:42It would really help a lot of faculty with
• 49:45some of those EPAs and curricular demands.
  • 49:47So I love that possibility.
  • 49:50And then to that point,
• 49:51we just had a case that was
• 49:54written by faculty specialists that
• 49:56had some issues as far as bias and
• 50:00stigmatizing language go, and actually,
• 50:02on one hand, delegating case development to ChatGPT
• 50:04allows you to spend more time looking
• 50:06at that instead of creating the content.
• 50:08But also, like Conrad
• 50:10and Elizabeth have said,
• 50:12we can ask follow-up questions,
• 50:13and, for example,
• 50:15input "watch out for these types
• 50:17of terms, these types of themes."
• 50:18So it can actually be helpful in that
• 50:20sense too if you're on the lookout.
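What that kind of standing "watch out for these terms" instruction could look like as a reusable check is sketched below. This is a minimal illustration assuming the openai v1 Python client; the model name, prompt wording, and flagged-term examples are invented here, not taken from the presentation.

```python
# Illustrative sketch only: asking a chat model to screen a draft
# teaching case for biased or stigmatizing language. Assumes the
# openai v1 Python client; the model name, prompt wording, and the
# flagged-term examples are invented for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REVIEW_PROMPT = """You are reviewing a draft clinical teaching case.
Flag any biased or stigmatizing language, for example:
- "substance abuser" (prefer "person with a substance use disorder")
- "non-compliant" (prefer "not taking medications as prescribed")
- blaming or moralizing descriptions of the patient.
For each issue, quote the phrase, explain the concern, and suggest
a neutral alternative.

Draft case:
{case_text}
"""

def review_case(case_text: str) -> str:
    """Return the model's critique of a draft teaching case."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[
            {"role": "user",
             "content": REVIEW_PROMPT.format(case_text=case_text)},
        ],
    )
    return response.choices[0].message.content
```

The point is only that the screening instruction can be baked into a standing prompt and reused across cases, rather than retyped as a follow-up question each time.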
• 50:25Gary? So one
• 50:27of the questions that I see coming
• 50:29up most often regarding AI is sort
• 50:31of the ethics of it, and I deeply
• 50:34recognize that we're very early in the
• 50:36world of AI to really substantively
• 50:38have those conversations.
  • 50:40But I'm wondering how each of you
  • 50:43are managing transparency with this.
• 50:45And I mean, I tried writing a letter of
• 50:48recommendation this year using ChatGPT,
• 50:49and there was just a deep,
• 50:52almost dirty feeling doing it. So
• 50:55how are you approaching transparency
• 50:58with this?
• 51:01Conrad and Elizabeth, what
• 51:01do you think about that?
• 51:06Definitely something we're thinking about.
• 51:07But I think there need to be
• 51:09better guidelines; I
• 51:10think people need to think more
• 51:12about how we're addressing this.
• 51:13The first thing that comes to
• 51:15mind is the research literature.
• 51:16Different journals are dealing with
• 51:18this in very ad hoc, different ways,
• 51:21and for a recent submission I did,
• 51:24they asked for every ChatGPT log I'd
• 51:26had related to the content of the
• 51:29research that we were submitting.
• 51:31And that was really extensive.
• 51:33I'd gone back and forth a lot with
• 51:35ChatGPT asking questions about
• 51:37how to analyze statistical results.
  • 51:40It's really good at statistics too,
  • 51:41again needing to be corroborated,
  • 51:44but it is a complex question and I think
  • 51:47better guidelines need to be developed.
  • 51:50I really agree with that.
• 51:51But, I mean, Yale is one
  • 51:53of the most prestigious academic
  • 51:56institutions in the world.
• 51:58And I think enforcing that, above
• 52:02all, we stand for academic integrity
• 52:04and personal integrity as physicians,
• 52:07and, you know,
• 52:10essentially encouraging students
• 52:13to act in a way that is
• 52:17consistent with those values and
• 52:19also to take personal responsibility,
• 52:21and then having very severe
• 52:23consequences if they do not,
• 52:25I think makes a lot of sense.
• 52:27But definitely having very explicit
• 52:31policies as to how you want things
• 52:33cited and how you want transparency
• 52:36to be specified in anything that
• 52:38you submit is really important.
• 52:43Another example that I can think
• 52:45of is when I was shadowing,
• 52:48probably a few months ago now, in the CCU.
• 52:50I was following along, and as I
• 52:52was a first-year medical student, I
• 52:53think, at the time, and really didn't
• 52:55have that much of a grasp, I had
• 52:57a million questions, and I asked the
• 52:59attending that I was following if it was
• 53:01OK to be on my phone a little bit
• 53:04in the background Googling things.
• 53:06Well, when I asked,
• 53:07I admitted to using ChatGPT, and
• 53:09actually, he was very interested
• 53:12and enthusiastic, read a lot of
• 53:14the responses, and was very happy
• 53:16that I was using it.
• 53:17And I think, for a lot of those
• 53:19little questions that come up, it
• 53:22can be really helpful for augmenting,
• 53:24again recognizing it's not a source
• 53:26of truth, etcetera.
• 53:28But that kind of transparency,
• 53:29I think, can be helpful;
• 53:31it is definitely a nervous step to
• 53:34admit to your professor or attending
• 53:37that maybe that's a tool you're
• 53:40using, and being cognizant of how
• 53:41you're using it is very important.
• 53:43But that being said, on the PhD
  • 53:46side of what I'm doing here at Yale,
  • 53:48so much of that type of learning is
  • 53:51harnessing the tools that we have to
  • 53:54produce the best research possible.
  • 53:57And then when we write out our methods,
  • 53:59you know, we talk about
  • 54:00what tools we used and why.
• 54:02And I think that the
• 54:05application of ChatGPT to
• 54:07medicine is almost pushing us to
• 54:10be more scientific in the way that
• 54:13we are thinking, because we have
• 54:16to be using the methods correctly,
• 54:19essentially, in order to produce
• 54:21the appropriate output. I
  • 54:23think that's a great, sorry.
• 54:25I think that's a great segue
• 54:27to highlight one of the questions in
• 54:29the chat about mitigating systemic
• 54:31bias, where ultimately, at the end of
• 54:33the day, we may not be able to, right?
  • 54:35This is essentially something running
  • 54:38on the statistical collective
  • 54:40intelligence of the Internet and
  • 54:42related sources and then generating
• 54:43what that is likely going to say.
  • 54:46It's going to be biased.
  • 54:48We can't avoid it, but we can recognize it.
  • 54:50We can identify it.
  • 54:52We can work with scientific rigor
  • 54:55to mitigate it.
  • 54:56And if we are transparent and
  • 54:59we do the best we can,
  • 55:01then we are doing the best we can.
• 55:05And I think, to Elizabeth's point,
  • 55:07you know, this is what leadership looks like.
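To make the "statistical collective intelligence" point concrete, here is a toy sketch (invented for illustration, not shown in the talk) of how a purely statistical next-word model reproduces whatever skew exists in its training text:

```python
# Toy illustration of why a next-token predictor inherits the biases
# of its training text: a bigram "model" trained on a skewed corpus
# simply reproduces that skew when it generates.
import random
from collections import Counter, defaultdict

corpus = (
    "the nurse said she is here . "
    "the nurse said she will help . "
    "the nurse said she is busy . "
    "the surgeon said he is here . "
).split()

# Count which word follows which; this is the entire "training" step.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to its training frequency."""
    counts = following[prev]
    return random.choices(list(counts), weights=counts.values())[0]

# The model has no concept of nurses or gender, yet after "said" it
# emits "she" roughly three times as often as "he", because that is
# the statistic of the text it was trained on.
print(Counter(next_word("said") for _ in range(1000)))
```

A large language model is vastly more sophisticated, but the same principle applies: the output distribution reflects the source text, which is why the bias can be surfaced and mitigated but not simply switched off.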
• 55:11The last thing: I don't want to
  • 55:12run right up to the last minute,
  • 55:14but I do think it's really
  • 55:15important to address the chat
  • 55:17comment about HIPAA compliance.
• 55:19And another thing that really
• 55:20needs to be emphasized to students,
• 55:22to residents, to everyone in the
• 55:24hospital is that with the publicly
• 55:26available versions of ChatGPT
• 55:28that you use through a browser,
• 55:30no patient private health
• 55:32information should ever go into
• 55:35those models, because OpenAI, for
• 55:37those public models specifically
• 55:39that are on the web browser,
• 55:41does sometimes use what you
• 55:43input to further improve the models.
• 55:45So there is definitely risk
• 55:47of PHI leakage.
  • 55:49That's why the hospital has
  • 55:50blocked those public models.
  • 55:52And I think it's important for the
  • 55:54hospital to explain why that's happening.
  • 55:56And I also think that there can be
• 55:58future solutions, as David mentioned;
• 56:00for research purposes right now, there
• 56:02are HIPAA-compliant workarounds that
• 56:04never embed PHI into these models.
• 56:07And I think there could be interactive
• 56:09mobile uses as well in the future,
• 56:12but we don't have those yet, and it's
• 56:13important that people understand
• 56:15the limitations in the meantime.
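As a very rough sketch of the "never embed PHI" idea Conrad describes, here is an illustrative de-identification pass run before any text leaves the local environment. The patterns, placeholder tokens, and example note are invented here; real HIPAA de-identification covers many more identifier types and requires institutional approval.

```python
# Illustrative sketch, not an approved workflow: strip obvious
# identifiers from text *before* it is ever sent to an external model.
# HIPAA Safe Harbor de-identification covers 18 identifier categories;
# the regexes below are only a hint of the idea.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),            # SSNs
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),      # dates
    (re.compile(r"\b\d{10}\b|\(\d{3}\) ?\d{3}-\d{4}"), "[PHONE]"),
    (re.compile(r"\bMRN[:# ]?\d+\b", re.IGNORECASE), "[MRN]"),   # record numbers
]

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

note = "Pt Jane Doe, MRN 4471123, seen 12/01/2023, cb (203) 555-0199."
print(redact(note))
# -> "Pt Jane Doe, [MRN], seen [DATE], cb [PHONE]."
```

Even this toy version shows the limits: the patient's name passes straight through, which is why pattern scrubbing alone is not a compliance strategy and why the institutionally approved, PHI-safe pathways mentioned above matter.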
  • 56:17So I think on that note,
  • 56:19it's about time to wrap.
• 56:20Let's give you back a minute
  • 56:21or two before whatever you
  • 56:23have to do in the afternoon.
  • 56:24Thank you for joining us to
  • 56:26talk more about ChatGPT.
• 56:28This is going to be an
• 56:30ongoing discussion for many,
• 56:31many years in medical education.
  • 56:34So feel free to reach out.
  • 56:36Thanks again and have a great afternoon.
  • 56:38This has been MEDG.