
Yale Psychiatry Grand Rounds: February 11, 2022


February 11, 2022

"Prejudice Reduction: Progress and Challenges"

Elizabeth L. Paluck, PhD, Professor, Princeton University


Transcript

  • 00:00I'm very pleased to introduce
  • 00:03Professor Betsy Levy Paluck to give
  • 00:06the annual lecture for the Division
  • 00:07of Prevention and Community Research.
  • 00:09Doctor Paluck is professor and acting chair
  • 00:12of psychology at Princeton University,
  • 00:14where she is also professor of public
  • 00:16affairs and the faculty associate
  • 00:18in the Department of Politics.
  • 00:19She also serves as deputy director of the
  • 00:22Kahneman-Treisman Center for Behavioral
  • 00:24Science and Policy at Princeton.
  • 00:26After completing her undergraduate and
  • 00:28doctoral degrees in psychology at Yale,
  • 00:30she spent two years as an Academy
  • 00:32scholar at Harvard.
  • 00:33Before joining the faculty at Princeton,
  • 00:35where she has remained ever since.
  • 00:38Doctor Paluck is a leading researcher on
  • 00:40prejudice and intergroup conflict reduction,
  • 00:42conducting her research using field
  • 00:44experiments in the US and Africa
  • 00:46and focusing on mass media and
  • 00:49interpersonal communication.
  • 00:50Much of her research has examined
  • 00:52social norms and group influence
  • 00:54through peers and role models,
  • 00:56narrative communication,
  • 00:57and group discussion as a vehicle
  • 01:00for behavior change. Her research
  • 01:02on social norms and social networks
  • 01:04has identified strategies for
  • 01:06reducing discrimination,
  • 01:07as well as bullying and ethnic conflict,
  • 01:09in various contexts,
  • 01:10including American high schools
  • 01:12and post-conflict Rwanda.
  • 01:14Her translation of theories
  • 01:16in social psychology,
  • 01:18which are usually developed and
  • 01:20tested in laboratory experiments
  • 01:22into real-world interventions and
  • 01:23randomized controlled field experiments,
  • 01:25has resulted in discoveries of
  • 01:27new ways to positively influence
  • 01:30individual and group behavior.
  • 01:32Doctor Paluck is the author of
  • 01:34numerous publications and the
  • 01:35recipient of many honors and awards,
  • 01:37including an early career award from
  • 01:39the Society for the Study of Peace,
  • 01:42Conflict,
  • 01:42and Violence of the American
  • 01:45Psychological Association, and her
  • 01:472017 selection as a MacArthur Fellow,
  • 01:49which involves recognition as one
  • 01:52of 24 talented individuals showing
  • 01:54extraordinary originality and
  • 01:56dedication in creative pursuits.
  • 01:58As activists, artists,
  • 02:00scholars or scientists and receiving
  • 02:02an unrestricted fellowship.
  • 02:04for five years from the
  • 02:06John D. and Catherine T.
  • 02:08MacArthur Foundation.
  • 02:09In addition to MacArthur funding, Dr.
  • 02:12Paluck has been funded by the
  • 02:14National Science Foundation,
  • 02:15the Canadian Institute for Advanced Research,
  • 02:17and numerous foundations,
  • 02:19including the W.T. Grant, Russell
  • 02:21Sage, Spencer, and Guggenheim foundations.
  • 02:23We are very pleased to have her talk
  • 02:25with us today about prejudice reduction,
  • 02:27progress and challenges.
  • 02:29Doctor Paluck.
  • 02:32Thank you so much for that welcome.
  • 02:34I'm really excited to be here. You know,
  • 02:39about two years ago, when I completed
  • 02:42this manuscript that I'm going to
  • 02:44talk to you about today, I hit send.
  • 02:47We went into lockdown a few days
  • 02:49later and a few days later,
  • 02:52after that I had a baby and I really had
  • 02:55no idea that I would be able to share
  • 02:58this paper with as many audiences as I have.
  • 03:01I, I just couldn't expect that
  • 03:03Zoom would allow me to do this,
  • 03:05and that's a silver lining
  • 03:07for me like this right now.
  • 03:09There's a silver lining.
  • 03:10Despite all of the isolation
  • 03:13that came afterward.
  • 03:14Because this is a paper that
  • 03:16I feel so strongly about,
  • 03:18and to me it it it's a circling back of
  • 03:25a broad look that I took at the prejudice
  • 03:27reduction field very early in my career.
  • 03:29And I'll tell you about about why
  • 03:31that is so I'm I'm also just really
  • 03:33excited to be here because this
  • 03:35is the first psychiatry audience
  • 03:36I've presented to on the topic.
  • 03:38And that's just really meaningful for me.
  • 03:41I love presenting to lots
  • 03:43of different audiences,
  • 03:44and this paper has been taken out
  • 03:46over the course of the pandemic
  • 03:48to many different audiences,
  • 03:49and I've had really interesting
  • 03:51feedback from them,
  • 03:52and I really look forward
  • 03:53to your feedback as well.
  • 03:54So thanks again for having me.
  • 03:57I want to start out by acknowledging
  • 03:59my co-authors on the paper:
  • 04:01Roni Porat,
  • 04:02who is on the left-hand side and is a
  • 04:05professor at Hebrew University; Chelsey Clark,
  • 04:07who is an absolute star graduate student,
  • 04:11so watch that space for
  • 04:13Chelsey; and Donald Green,
  • 04:15who was my mentor at Yale and he was in the
  • 04:18political science department at the time.
  • 04:21You know, of course,
  • 04:22he's my forever mentor now, and we
  • 04:25recently, as I just mentioned,
  • 04:27published
  • 04:28this paper called
  • 04:29Prejudice Reduction: Progress and
  • 04:31Challenges in the Annual Review of
  • 04:33Psychology, and in this paper we're
  • 04:36asking what works to reduce prejudice,
  • 04:39and this was a paper
  • 04:41that was a calling back to a
  • 04:44previous paper that I wrote with Don,
  • 04:45my advisor, as a graduate student,
  • 04:47and it was inspired when Don was
  • 04:50teaching a course on political
  • 04:52intolerance and hate crime,
  • 04:54and I was his TA from the psychology
  • 04:56department.
  • 04:57He said to me, you know,
  • 04:59we're going to spend a long time
  • 05:01going over the political and
  • 05:04social and economic conditions that
  • 05:07lead to intolerance and to hate crime.
  • 05:09It's a
  • 05:10fairly stern and depressing slog,
  • 05:13and I like to end on, you know,
  • 05:16some notes of promise.
  • 05:17So what do we know about
  • 05:19reducing prejudice and its
  • 05:22behavioral expressions from psychology?
  • 05:24I said no problem, I've got you.
  • 05:27You know, our field was
  • 05:28in part founded around this;
  • 05:30there is such a big
  • 05:32field of prejudice reduction.
  • 05:33I'll come back with some really
  • 05:35convincing papers and he said,
  • 05:36you know, to be convincing.
  • 05:37I would like them to have
  • 05:39some causal identification.
  • 05:40I would like them, you know,
  • 05:42to have some behavioral measurement,
  • 05:44not just attitudinal.
  • 05:45I said got it, and I found that
  • 05:48the search was a lot harder than
  • 05:49I thought it would be that I had.
  • 05:51Maybe some unfounded optimism
  • 05:53in in what we had found so far
  • 05:55in the psychology literature.
  • 05:57And so I just kept searching and
  • 05:59it ended up in this mega paper,
  • 06:01in which we essentially hoovered
  • 06:03up all that there was not just
  • 06:05in the field of psychology,
  • 06:06but in all of the social sciences.
  • 06:08The the policy literature,
  • 06:10the grey literature.
  • 06:11So we were searching for,
  • 06:12published and unpublished,
  • 06:13and even across the biomedical sciences,
  • 06:15to ask you know, what, what,
  • 06:17what's being done in many,
  • 06:18many different fields.
  • 06:19It resulted in this paper,
  • 06:21which was really a narrative review
  • 06:24because there were so many studies out there.
  • 06:27That were quite descriptive and
  • 06:29descriptive of really interesting
  • 06:31ideas on how to reduce prejudice,
  • 06:33but the evidence and this is
  • 06:35what we argue in this paper,
  • 06:38was depressingly thin,
  • 06:39particularly for this question
  • 06:41that I've put here up at the top.
  • 06:44What works to reduce prejudice?
  • 06:45What?
  • 06:46What can cause reductions in attitudes,
  • 06:49emotions, norms,
  • 06:51or particularly behaviors?
  • 06:52And so the call that we put
  • 06:55in this paper was
  • 06:57to see more experimentation that
  • 06:59could tell us something about
  • 07:01causal inference and to see it
  • 07:03out in the field to see whether
  • 07:05behavior could change in that rich
  • 07:07thicket of reality out there.
  • 07:09And and you know,
  • 07:11particularly whether these
  • 07:12programs could be successful in
  • 07:14the places they were intended for,
  • 07:17so we were less interested in the in
  • 07:20the proof of concept as small lab studies.
  • 07:23But that that was the call that we put out,
  • 07:24and so now it's 12 years later and I
  • 07:27have a lab of my own with students of my own,
  • 07:30and it almost became a personality test
  • 07:32for us as we started discussing this,
  • 07:34Ronnie was the postdoc in my lab at the time.
  • 07:36Chelsea was a new graduate
  • 07:38student and we said,
  • 07:39you know,
  • 07:39do we think the field has changed since
  • 07:42this call to arms in the last paper?
  • 07:44I shouldn't say call to arms; it's,
  • 07:46you know, this clarion call or
  • 07:48this encouragement,
  • 07:49right?
  • 07:49So we decided to review the
  • 07:51literature again, going back over
  • 07:53the past number of years since
  • 07:55we published that paper.
  • 07:56OK,
  • 07:56so that's my that's the
  • 07:57history of this paper,
  • 07:59and we decided to ask three
  • 08:01questions in this paper.
  • 08:02So what's happened in the last dozen years?
  • 08:05What are the average effects
  • 08:06of the interventions?
  • 08:07So in our last paper,
  • 08:10we used studies that were
  • 08:12purely qualitative all the
  • 08:14way to purely quantitative,
  • 08:16and we decided to really
  • 08:18focus on studies this time
  • 08:20that had quantitative measures so that we
  • 08:23could calculate an average effect.
  • 08:25We also decided to focus on experiments
  • 08:27so that we could really focus in on
  • 08:29what do we know about causal effects.
  • 08:32And then finally the third
  • 08:33question that this paper poses.
  • 08:35Can social science answer the public
  • 08:37call to reduce prejudice in the world?
  • 08:39And since I just gave you the timeline
  • 08:42of when I hit send on the final draft,
  • 08:45you know that the public
  • 08:48call got a lot louder
  • 08:50as we were publishing this paper,
  • 08:53and so the stakes seemed to be appropriately
  • 08:56high for asking this question of
  • 08:58how do we answer that call, right?
  • 09:01Have the social sciences and the
  • 09:03biomedical sciences responded in
  • 09:05a way that says we're here for them,
  • 09:07with effective programming and
  • 09:09effective social science,
  • 09:11social change theories, when
  • 09:13it's actually being asked for?
  • 09:16OK, so I'm just going to give you the
  • 09:18answers to all of these questions right
  • 09:19now, and then I'll spend the rest of my time
  • 09:21defending them with data
  • 09:23and see what you think.
  • 09:24So the first answer there's been an
  • 09:27uptick in prejudice reduction research.
  • 09:29Some of this research is just
  • 09:31going to become classic.
  • 09:32It's going to be taught
  • 09:33in the social sciences,
  • 09:34not just in classes that focus
  • 09:38specifically on on conflict or prejudice
  • 09:41or or intolerance and hate crimes.
  • 09:44But everywhere because these
  • 09:46papers are just really these,
  • 09:47these research projects are just phenomenal,
  • 09:49and I'm going to tell you
  • 09:50about some of them today.
  • 09:52However,
  • 09:52the modal research is very different
  • 09:55from these future classics.
  • 09:58See, that's partly mathematical.
  • 09:59Of course,
  • 09:59the mode has to be different
  • 10:01from the outliers,
  • 10:01but there's ways that I want to
  • 10:04characterize modal research and even
  • 10:06the majority of the research that
  • 10:08really give me and us great pause in
  • 10:10recommending some of these strategies,
  • 10:12and in particular some of the most
  • 10:15popular prejudice reduction ideas out
  • 10:17there treasured by the lay public,
  • 10:20but also scientists alike.
  • 10:21We actually don't find that
  • 10:23much support for them, right?
  • 10:24So I'm going to be presenting
  • 10:26on what I want to
  • 10:28underline: an absence of evidence
  • 10:30is not evidence of absence, right?
  • 10:32So we're not finding backlash
  • 10:34effects so much as just gaps.
  • 10:36And then finally,
  • 10:37and I think this is the sourest note,
  • 10:40the most rigorous research in this review
  • 10:43shows very small reductions in prejudice.
  • 10:46OK?
  • 10:48So from there and in the paper,
  • 10:51we ask well what should the next
  • 10:53generation of prejudice reduction
  • 10:55research look like based on this?
  • 10:57And then in the most editorial
  • 10:59touch to this talk I want to
  • 11:01speak at the end on whether we're
  • 11:02using the right model of change,
  • 11:04it's going to be a very evidence driven talk.
  • 11:07This is just going to be my opinion and
  • 11:08and our opinion as a as an author group,
  • 11:11so I'll get to that at the end.
  • 11:13And that's where I especially
  • 11:15invite your feedback.
  • 11:16OK, so now the evidence
  • 11:17for what I just argued.
  • 11:19First, there is an increase in
  • 11:21most types of prejudice reduction
  • 11:22research. That black line at the
  • 11:24top shows any kind of research.
  • 11:26This is broken down by methodology,
  • 11:28but I could do this in a
  • 11:29number of different ways and I'll
  • 11:31start showing you that later.
  • 11:33You see that it's mostly driven
  • 11:34by studies that are taking place
  • 11:36in the scientific lab or online
  • 11:38and online studies really take
  • 11:40off, people running studies on
  • 11:43MTurk and Prolific doing online
  • 11:47brief interventions. That orange
  • 11:49line measuring field experiments is
  • 11:51toggling down at the bottom at about,
  • 11:53you know, 2, 3, or 4 experiments per year.
  • 12:01This is a PRISMA diagram just
  • 12:04to show you how we came about
  • 12:08all of these studies.
  • 12:09Well, it's a very transparent process.
  • 12:11We followed biomedical meta-analytic
  • 12:13standards, and you can see all of our studies,
  • 12:16all of our code, and, you know, PRISMA
  • 12:18diagrams that show you how we make
  • 12:20decisions about inclusion in our study
  • 12:21throughout the entire thing. It's up
  • 12:23on Dataverse for any of you who are
  • 12:26interested but just to take you briefly
  • 12:28and narratively through that process.
  • 12:30We searched through five separate
  • 12:31databases to find all of the studies
  • 12:34that are in this meta analysis.
  • 12:36Four of them are open to all,
  • 12:37one is a proprietary Princeton-
  • 12:40based text-based search,
  • 12:41although we do include all of the keywords
  • 12:44that we used in the text-based search;
  • 12:46these searches led to 16,000 results,
  • 12:49non-unique results that we spent
  • 12:52one robust summer reviewing in full
  • 12:55with a team of master's students,
  • 12:57and then we identified
  • 13:00about 1,800 that were eligible.
  • 13:02The PI team read all of those,
  • 13:04and the criteria we had in mind were
  • 13:06very broad for the definition of prejudice.
  • 13:09So we just described it as animus,
  • 13:11and it could be expressed in terms of
  • 13:14an emotion, an attitude, a belief,
  • 13:16or a behavior. We do not include sexism
  • 13:19in this review in part just following
  • 13:22the previous review standard.
  • 13:24I can talk a little bit about that.
  • 13:26You know why we made that decision in
  • 13:28the first review? You know in part.
  • 13:30I can say it's because these literatures
  • 13:32are are surprisingly separate in
  • 13:34their theoretical orientations,
  • 13:35but but I can also answer more
  • 13:37questions about that.
  • 13:38We didn't review what is now being
  • 13:42called affective polarization,
  • 13:43and that is to say partisan
  • 13:45bias and prejudice between Democrats and
  • 13:48Republicans. But that's a literature,
  • 13:49as you probably know, that's on the rise.
  • 13:51So keep your eye on that literature.
  • 13:53We also left out,
  • 13:54you know some of the toy prejudices
  • 13:56that psychologists,
  • 13:58social psychologists, like to play with.
  • 13:59So you're not going to see any studies
  • 14:00in here testing prejudice from,
  • 14:03you know, Ohio State students versus
  • 14:06Michigan students.
  • 14:07OK, and they all had to be experimental,
  • 14:10so there had to be a random assignment
  • 14:12to treatment and control or placebo
  • 14:13so that we could understand the
  • 14:15direction of effects and and causality.
  • 14:17OK,
  • 14:17so in our final sample we have just
  • 14:19over 300 manuscripts and 418 experiments,
  • 14:21and we coded all of them.
  • 14:23I want to tell you that if any
  • 14:25of you use Instagram,
  • 14:27this is like using the gentlest,
  • 14:31the most flattering,
  • 14:32filter on the field,
  • 14:33because when we took the quantitative
  • 14:35data from these studies,
  • 14:37we just let authors suggest to us what
  • 14:39were their top most important outcomes
  • 14:41so we would look at their abstract
  • 14:44to see what they were featuring.
  • 14:46As we know authors like to feature their
  • 14:49most promising findings in the abstract,
  • 14:52so we let the authors tell us.
  • 14:55Does this mean that we might not
  • 14:56be capturing some negative effects?
  • 14:58It might. So, you know,
  • 15:01if anything, this meta-analysis is
  • 15:03giving you maybe a positive bias
  • 15:05on the field, but that's how
  • 15:08we chose quantitative findings.
  • 15:09We chose up to five outcomes
  • 15:12from each study and averaged them.
  • 15:14And then we also coded them
  • 15:16qualitatively so that we can tell you
  • 15:18about the theoretical orientations
  • 15:20of these papers and other types of
  • 15:23features of their interventions and
  • 15:25their study populations and so forth.
  • 15:28OK, the next
  • 15:30conclusion that I argued to you: several
  • 15:32of them are destined to become classics.
  • 15:35Why?
  • 15:36What do we like about these studies? So, number one,
  • 15:39their interventions are robust,
  • 15:42their interventions that you could take
  • 15:44to a community or an organization.
  • 15:47A student group tomorrow if you wanted to.
  • 15:50What that means is that they typically
  • 15:53are aware of and have anticipated social,
  • 15:56sometimes even political as well
  • 15:58as psychological processes.
  • 16:00In terms of trying to affect participants,
  • 16:03and they're well described.
  • 16:06They also use extremely robust methods,
  • 16:08right?
  • 16:08And so here's where I'm going to preview
  • 16:11some complaints that we have later.
  • 16:13These studies,
  • 16:13by contrast to many others,
  • 16:15have very large sample sizes.
  • 16:17They typically measure behavior
  • 16:19as well as attitudes or beliefs.
  • 16:22There's a lot of attention to randomization
  • 16:24and to people who are dropping out;
  • 16:25they use appropriate econometric
  • 16:27methods to address attrition.
  • 16:30They've pre-registered their tests,
  • 16:34and they use open data.
  • 16:37So here's an interesting thing.
  • 16:39We identify this group of studies
  • 16:41that we just think are absolutely
  • 16:43terrific, and they actually come from
  • 16:45very different theoretical backgrounds
  • 16:47and approaches, which is quite nice.
  • 16:50Despite the fact that they're all
  • 16:51very different from one another,
  • 16:53they all have promising, positive,
  • 16:56but very small,
  • 16:59effect sizes. OK,
  • 16:59so that's something to flag right away.
  • 17:02Well,
  • 17:02let me tell you about a few.
  • 17:04One thing I want to tell you right
  • 17:06away is that almost every single
  • 17:08one was led by a doctoral student
  • 17:10that is just so amazing.
  • 17:12I mean,
  • 17:13we faculty have no excuse on the one hand,
  • 17:16the future is bright.
  • 17:17On the other hand, wow,
  • 17:18they're they're leading the way.
  • 17:21Here are two studies that I want
  • 17:23to tell you about that test
  • 17:24the effect of contact, right?
  • 17:26So the vaunted contact hypothesis, in which
  • 17:29contact under certain
  • 17:31conditions like cooperation,
  • 17:32equal status, a common purpose, and,
  • 17:36you know, sort of authorities'
  • 17:38legitimization, can reduce prejudice.
  • 17:41And one treasured site for
  • 17:46testing contact is team sports.
  • 17:50There's a lot of programming around this.
  • 17:52First of all,
  • 17:53I've done a meta analysis of contact
  • 17:55in the past and I want to tell you
  • 17:57that we know a lot less about it.
  • 17:59Its effects than we think we do,
  • 18:02especially for policy outcomes,
  • 18:04but these two studies stepped in to
  • 18:06fill some of those gaps quite beautifully.
  • 18:09Salma Moussa,
  • 18:10in northern Iraq,
  • 18:11organized a Soccer League between
  • 18:14Christian and Muslim players.
  • 18:16This isn't a Christian area,
  • 18:18and so Muslims are the minority in that area,
  • 18:21and they were randomized to be
  • 18:23either on your team as a Christian
  • 18:25player or on the opposite team so
  • 18:27that she could test the idea of.
  • 18:29Is it about contact?
  • 18:30Do you need to be cooperating on the same
  • 18:33team and then tested various outcomes,
  • 18:35not only prejudice towards Muslims,
  • 18:38but also ideas about policy and
  • 18:41inclusiveness in policy and behaviors
  • 18:43such as would you use a voucher given
  • 18:47to you to eat at a Muslim restaurant
  • 18:49in a Muslim neighborhood following
  • 18:50your experience on the Soccer League?
  • 18:55Completely independently,
  • 18:55but at the same time and
  • 18:58equally brilliantly, Matt Lowe, a
  • 19:00doctoral student in economics, went
  • 19:02to India and organized cricket
  • 19:03leagues for low- and high-caste
  • 19:06men, doing much the same
  • 19:09kind of randomization.
  • 19:10Also looking at things like
  • 19:14would men actually punish themselves,
  • 19:16and so it's a very economics-style
  • 19:18measurement of behavior.
  • 19:20Would men actually sort of
  • 19:22inefficiently trade if they were given
  • 19:24resources such as new sandals,
  • 19:27mismatched sandals?
  • 19:28Would they go to the lengths of trading
  • 19:31with a low-caste person to get the
  • 19:34right match for their sandal,
  • 19:37or would they discriminate and
  • 19:38get less good matches by
  • 19:41trading only with higher-caste men?
  • 19:42So really convincing interesting outcomes,
  • 19:46behavioral outcomes and so forth.
  • 19:49OK, another study that we just
  • 19:51want to highlight as really robust.
  • 19:54This study was done by a political
  • 19:57science doctoral student who was
  • 19:59interested in theories of confrontation.
  • 20:01So does confronting a person
  • 20:04who has done something racist work,
  • 20:07and does it work
  • 20:08over the long term?
  • 20:09And this was done on Twitter back when bots
  • 20:13were not as widely recognized a phenomenon.
  • 20:16So it is, you know,
  • 20:18probably not something that can
  • 20:21be repeated in the exact same form today,
  • 20:24but it's an elegant two-by-two design
  • 20:26in which first the student went
  • 20:29online to find white men identified
  • 20:31as white men by their avatars,
  • 20:33who had used the N-word as a
  • 20:36racial slur in the past week.
  • 20:38Those actual Twitter users
  • 20:41were then randomly assigned to
  • 20:43have a tweet tweeted at them by an
  • 20:46avatar who was either identified
  • 20:48by their picture as Black or white
  • 20:51and as high or low "status,"
  • 20:52which I put in quotes, as identified
  • 20:55by their number of followers.
  • 20:57So either having a large following or
  • 21:00small following, and the tweet
  • 21:02that was sent to them from one of
  • 21:04these users essentially said to them,
  • 21:06you know, you have to watch what
  • 21:07you're saying, that's an incredibly
  • 21:10hurtful word, and so on.
  • 21:12And so they confronted them
  • 21:13on the use of the word,
  • 21:14and then Munger actually just follows
  • 21:17those users to see if they essentially
  • 21:20recidivate, if they use the word
  • 21:22again, and how long into the future
  • 21:25the effect of that confrontation lasts.
  • 21:28He got long-term measurement,
  • 21:30also open data.
  • 21:31Everything was pre-registered.
  • 21:33This is quite a heroic experiment.
  • 21:38Diversity training and
  • 21:40online diversity training.
  • 21:41Short online diversity training
  • 21:43used by a global corporation,
  • 21:45and so done with the kind of
  • 21:48sample that you really want.
  • 21:51Enormous sample,
  • 21:51one of the only samples in this meta
  • 21:54analysis that could so convincingly
  • 21:56analyze heterogeneous effects.
  • 21:57That is,
  • 21:58for whom did this online training work?
  • 22:00If anyone,
  • 22:01and I'm not telling you the
  • 22:02results of all of these studies,
  • 22:04'cause we'll get to that later.
  • 22:05But one thing I just want to flag
  • 22:07here is that some of what you suspect
  • 22:10about diversity training maybe
  • 22:12seemed to come true in this study.
  • 22:13First of all, part of
  • 22:16the heroism of the study was that
  • 22:19the authors actually had to create
  • 22:21behavioral opportunities to see whether
  • 22:26the employees of this company would
  • 22:27take them up following the diversity
  • 22:29training, 'cause it turned out the
  • 22:31company was not tracking the types of
  • 22:33behaviors that we might be interested
  • 22:35in and couldn't, for legal reasons,
  • 22:37share others like promotion
  • 22:39and retention and so forth.
  • 22:42So what they did was they measured,
  • 22:44following this online training,
  • 22:45did employees sign up for a mentoring
  • 22:49hour to mentor
  • 22:52underrepresented
  • 22:55employees of the company,
  • 22:58which for this company included
  • 23:01women and underrepresented
  • 23:04and minoritized employees.
  • 23:07What they find is it's actually women and
  • 23:10minoritized employees who are the ones
  • 23:13who sign up for this coffee hour,
  • 23:15this mentoring hour, to mentor
  • 23:18others following this training.
  • 23:20OK, so those are a few of what we called
  • 23:22in the paper our landmark studies.
  • 23:24There are more and I really encourage
  • 23:25you to go and read about them,
  • 23:27'cause they're just real feats,
  • 23:29creative and brave.
  • 23:32But the modal type of research
  • 23:33is very different,
  • 23:34and that's what I argued
  • 23:35to you in the beginning.
  • 23:36Why?
  • 23:36Well,
  • 23:37first of all,
  • 23:37let me just describe all the
  • 23:39different kinds of research that
  • 23:41is in this meta analysis and these
  • 23:45are categories that we created.
  • 23:48There are types of interventions.
  • 23:51In the meta analysis that you
  • 23:53know fall under some of these,
  • 23:55you know buckets these categories
  • 23:57that we created and a couple of
  • 23:59things I want to point out to you.
  • 24:00First of all,
  • 24:02that top bar that that represents a
  • 24:05third of all activity in prejudice
  • 24:07reduction over the past dozen years is
  • 24:10called extended and imaginary contact.
  • 24:12Now if you don't know what that is,
  • 24:14it's it's an intellectual development in
  • 24:16the in the study of intergroup contact,
  • 24:19that's quite stunning.
  • 24:21Basically the the the move that's
  • 24:23been made in that literature over the
  • 24:25past dozen years has been to say we
  • 24:28have so much research on interpersonal
  • 24:30contact and you can see down below.
  • 24:32We think that face to face contact
  • 24:34might not even be necessary anymore.
  • 24:36So extended contact is knowing that
  • 24:39one of your friends and your social
  • 24:42network has a contact with an out
  • 24:45group member and imaginary contact is
  • 24:48exactly as it sounds it involves for example.
  • 24:51White American being asked to
  • 24:53imagine a conversation or contact
  • 24:55with a black American right,
  • 24:57so it's it's quite a notable thing that you
  • 25:01know that's where the intervention is gone.
  • 25:04You know, in the last dozen years,
  • 25:06and it takes up 33% of all
  • 25:08of our of our energies,
  • 25:09or roughly 33%.
  • 25:10Now the zoom bar is on my on my X axis,
  • 25:14so I can't see the next category.
  • 25:17Is cognitive and emotional training,
  • 25:19and this includes many different types
  • 25:22of psychological techniques for trying
  • 25:25to encourage people to self regulate
  • 25:27and rethink their biases.
  • 25:30So cognitive training includes things
  • 25:32like trying to suppress implicit.
  • 25:34In automatic biases,
  • 25:36emotional training addresses ways too.
  • 25:41To regulate emotions like guilt
  • 25:44or fear or shame in with respect
  • 25:48to thinking about group members,
  • 25:50social categorization is the next
  • 25:52most frequent kind of intervention
  • 25:54and it involves trying to
  • 25:56rethink group boundaries and so,
  • 25:58thinking about subordinate categories
  • 26:00instead of dividing us up into fractional
  • 26:03minority versus majority groups
  • 26:05or dominant versus oppressed, etc.
  • 26:08OK, so those are the most
  • 26:10common types of interventions.
  • 26:12And what I really want to point out here,
  • 26:14and I'm mindful that I'm saying this to a
  • 26:16group of people in a psychiatry department,
  • 26:18is that the energy of the past dozen years
  • 26:20has all been about prejudice reduction
  • 26:23through mentalizing through our mental lives.
  • 26:26OK, and I don't present
  • 26:28that to you as good or bad.
  • 26:30But those three top categories were
  • 26:33all about an individual strategy
  • 26:36for rethinking or imagining.
  • 26:41You know conditions under which there
  • 26:43should be less bias and prejudice.
  • 26:45OK, going along with this,
  • 26:48we had a category that we coded for each
  • 26:51and every study that we called light touch,
  • 26:53which is a bit of policy jargon,
  • 26:55but we defined it really clearly.
  • 26:56We said light touch means that it's brief,
  • 26:59cheap, and easy to implement, with
  • 27:02brief meaning 15 minutes or less.
  • 27:04So we had even a very clear
  • 27:06definition of that, and it characterized
  • 27:0976% of all interventions
  • 27:11that were studied over the past dozen years. OK,
  • 27:15and then the final way in which the
  • 27:17modal type of research is very different.
  • 27:19I already previewed a sort of
  • 27:23complaint earlier by praising
  • 27:26the landmark studies.
  • 27:28It's quite the mirror
  • 27:29opposite in the mode: for the rest of these
  • 27:32studies there are very small sample sizes.
  • 27:35There's a great amount of attrition,
  • 27:37people dropping out of the intervention,
  • 27:39but the analysts will simply
  • 27:42compare who's in the control group and
  • 27:43the treatment group afterward and say,
  • 27:45well, they're roughly consistent,
  • 27:47so we'll just proceed with our
  • 27:49usual analysis.
  • 27:49There's a lot of cluster randomization,
  • 27:51but analysis at the individual level,
  • 27:54which throws off the standard errors,
  • 27:57and there's a great deal of lack of
  • 27:59transparency, so not sharing data,
  • 28:02not preregistration.
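
A minimal sketch of the cluster-randomization point above (an illustration, not from the talk): with simulated, hypothetical "classroom" data, analyzing cluster-assigned outcomes at the individual level understates uncertainty relative to cluster-robust standard errors.

```python
# Illustration only: simulated cluster-randomized data (hypothetical "classrooms"),
# comparing naive OLS standard errors with cluster-robust ones.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_clusters, cluster_size = 20, 25                      # e.g., 20 classrooms of 25 people
cluster = np.repeat(np.arange(n_clusters), cluster_size)
treat = np.repeat(rng.integers(0, 2, n_clusters), cluster_size)   # assigned by cluster
cluster_shock = np.repeat(rng.normal(0.0, 0.5, n_clusters), cluster_size)
y = 0.1 * treat + cluster_shock + rng.normal(0.0, 1.0, n_clusters * cluster_size)

df = pd.DataFrame({"y": y, "treat": treat, "cluster": cluster})

naive = smf.ols("y ~ treat", data=df).fit()            # ignores the clustering
robust = smf.ols("y ~ treat", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["cluster"]})

print(f"naive SE:          {naive.bse['treat']:.3f}")
print(f"cluster-robust SE: {robust.bse['treat']:.3f}")  # typically much larger
```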
  • 28:04OK so I'm just showing you this.
  • 28:06It's it's quite small for you.
  • 28:08I just want to characterize the
  • 28:10rest of the sample when I'm.
  • 28:12You know making these global descriptions?
  • 28:14I want you to know that the types of
  • 28:16outcomes that they're also measuring.
  • 28:17The vast majority are still
  • 28:19explicit attitudes or beliefs.
  • 28:21There's still very little measurement
  • 28:24of behavior, empathy, emotion,
  • 28:26the types of prejudice, race,
  • 28:28and ethnicity are still the most,
  • 28:30and I say still because I did
  • 28:33the previous meta analysis,
  • 28:34so some of this is just really staying
  • 28:36consistent with with past work,
  • 28:38race and ethnicity appropriately
  • 28:39are are still the most studied.
  • 28:42Ability is also studied a great deal,
  • 28:47prejudice against disabled people,
  • 28:49and then
  • 28:50a surging category is prejudice against
  • 28:53immigrants, asylum seekers, and refugees,
  • 28:56for quite understandable historical,
  • 28:57you know, trends in the past few years.
  • 28:59And then these
  • 29:02interventions that are being
  • 29:04studied are still predominantly taking
  • 29:06place on college campuses and now online,
  • 29:08for example on Amazon Mechanical Turk.
  • 29:11OK, what's the average effect?
  • 29:13So let's just get right into it.
  • 29:15The average effect is d =
  • 29:17.3, standard error of .02.
  • 29:20For people who are not fluent in d,
  • 29:22and even though I traffic in them,
  • 29:24I like to try to make it make sense to me:
  • 29:26this is the equivalent of taking
  • 29:29someone who is rating an outgroup,
  • 29:32say, Black Americans,
  • 29:33if it's a white American participant
  • 29:36group rating that group on a feeling
  • 29:38thermometer that ranges from zero to 100,
  • 29:41with 0 being
  • 29:41very cold and 100 being I feel
  • 29:43very warmly toward this group.
  • 29:45Let's take someone with kind of what
  • 29:46we might call a mild prejudice,
  • 29:48so 10 points below the neutral point, at a 40.
  • 29:50This would
  • 29:51move them on average to a 48. OK.
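
A rough back-of-the-envelope version of that conversion (an illustration, not a calculation from the talk), assuming the feeling thermometer has a standard deviation of about 25 points: a Cohen's d is the mean difference divided by the outcome's standard deviation, so the implied shift is d times that SD.

```python
# Illustration only: converting a standardized effect (Cohen's d) into points on a
# 0-100 feeling thermometer. The thermometer SD of 25 is an assumed round number,
# not a figure reported in the talk.
def d_to_points(d: float, outcome_sd: float) -> float:
    """Cohen's d is the mean difference divided by the outcome SD,
    so the implied raw shift is d * SD."""
    return d * outcome_sd

THERMOMETER_SD = 25      # assumed spread of thermometer ratings
baseline = 40            # "mild prejudice": 10 points below the neutral 50

for d in (0.30, 0.18):   # overall average vs. the large-sample quintile average
    shift = d_to_points(d, THERMOMETER_SD)
    print(f"d = {d:.2f}: {baseline} -> {baseline + shift:.1f}")
# d = 0.30 lands near the 48 mentioned in the talk; d = 0.18 near the 44.
```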
  • 29:54So it's worth taking a beat to
  • 29:57consider whether we think that
  • 29:59that's impressive or not.
  • 30:01You know,
  • 30:02in some ways you know I've placed
  • 30:03it below the neutral point,
  • 30:05and so now we're thinking,
  • 30:06well,
  • 30:07they're still basically just at neutral.
  • 30:08Is that good?
  • 30:09But recall that I've also told
  • 30:11you that for the predominant share
  • 30:14of this group of studies,
  • 30:18the intervention
  • 30:20only lasts for 15 minutes or less,
  • 30:22and so maybe that's quite impressive,
  • 30:24actually, for these brief,
  • 30:26light-touch interventions.
  • 30:27OK,
  • 30:29but now I want to dig a little more
  • 30:32deeply into this overall average effect,
  • 30:34right?
  • 30:35And I'm going to do that with
  • 30:36my hands above the table.
  • 30:37I'm not going to actually make
  • 30:39any judgments of these papers,
  • 30:41although I have many of them
  • 30:43and happy to share.
  • 30:44But I'm going to do this in a
  • 30:46way that's quite mechanical.
  • 30:47I'm just going to divide up all
  • 30:50of these studies into Quintiles,
  • 30:52and the quintiles will be determined by
  • 30:54how many people are in your treatment group.
  • 30:56OK,
  • 30:56so this is just about a sample
  • 30:58size analysis and what you see
  • 31:00is that the bottom quintile,
  • 31:01when I do this,
  • 31:02this is the the column over here on
  • 31:05the left hand side is 25 people or fewer.
  • 31:08That's really small.
  • 31:10The top quintile is 78 people or more.
  • 31:15Also quite small for.
  • 31:18For an intervention study right
  • 31:20for randomized controlled trial.
  • 31:22Now the other thing that we need
  • 31:23to look at is the effect size,
  • 31:25which is over here.
  • 31:26I hope you can see my pointer
  • 31:29in in the fourth column,
  • 31:31the effect size if there's publication
  • 31:34bias will track the the the sample size,
  • 31:37and in fact we find that's
  • 31:39exactly what it does,
  • 31:40that it's these tiny tiny little
  • 31:42studies that should not have any
  • 31:44power to find an effect size
  • 31:46that find a wopping effect size.
  • 31:48Right, and of course,
  • 31:50that's why they've been published
  • 31:52because they found a significant effect.
  • 31:55But that effect size that they find is
  • 31:57double the average, and if you look at,
  • 31:59you know the more high quality studies,
  • 32:01just according to their sample size.
  • 32:03I'm not making any other judgments
  • 32:04that we could maybe argue about
  • 32:06with respect to its measurement,
  • 32:07or what it prioritizes or the intervention
  • 32:09approach theoretical approach.
  • 32:10Just looking at sample size,
  • 32:12that average effect is a .18.
  • 32:15OK, so that's a lot smaller.
  • 32:18I'm just showing you how this
  • 32:19actually moves in a linear direction.
  • 32:21These are the average effect sizes,
  • 32:24and as the sample gets larger,
  • 32:28the effect size shrinks, right?
  • 32:30So I'm just showing you the
  • 32:32same thing in graphical form.
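
A minimal sketch of that mechanical exercise (using simulated study-level data rather than the review's actual dataset, which is on Dataverse): bin studies into quintiles by treatment-group size and compare the mean effect size within each bin.

```python
# Illustration only: the sample-size quintile exercise with simulated study-level data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n_studies = 418
treat_n = rng.integers(10, 300, n_studies)              # treatment-group sizes
# Simulate small-study inflation: smaller studies report larger effects on average.
d = rng.normal(0.15 + 8.0 / treat_n, 0.2)

studies = pd.DataFrame({"treat_n": treat_n, "d": d})
studies["quintile"] = pd.qcut(studies["treat_n"], 5, labels=range(1, 6))

summary = (studies.groupby("quintile", observed=True)
                  .agg(max_n=("treat_n", "max"), mean_d=("d", "mean"), k=("d", "size")))
print(summary)   # mean_d shrinks as the quintile's sample sizes grow
```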
  • 32:34So what does
  • 32:35a .18 mean? That average effect, for
  • 32:37the studies with the greatest
  • 32:39sample size, would move people
  • 32:42who were rating Black Americans,
  • 32:44the people, not people,
  • 32:46white participants, who were rating
  • 32:48Black Americans at a 40; they would
  • 32:50be moved to a 44 on average.
  • 32:53OK, so still positive movement.
  • 32:57Half of the effect from above,
  • 32:59in some respects, right?
  • 33:01Still worthwhile to pause to see
  • 33:03whether we're pleased with that,
  • 33:05and we could have an interesting
  • 33:06argument about it, right? OK.
  • 33:10So what's the average effect now?
  • 33:11I'm just going to show you how
  • 33:13the average effect moves down
  • 33:16for every type of intervention.
  • 33:18OK,
  • 33:18and so you can find your favorite
  • 33:21approach potentially on that.
  • 33:22On that axis there.
  • 33:23So all of these different intervention
  • 33:26buckets from entertainment to peer
  • 33:28influence to multicultural education,
  • 33:30diversity trainings,
  • 33:31interpersonal contact, right?
  • 33:33So all of these effect sizes,
  • 33:35you should have seen them just
  • 33:37jump down, approaching 0, when we
  • 33:39limit it to studies with larger
  • 33:41sample sizes. OK, so
  • 33:45I'll do that again,
  • 33:47jump down, right? And one thing that I want
  • 33:50you to pay close attention to is the Ns,
  • 33:52how many studies we have.
  • 33:54The N is listed after the type of intervention
  • 33:58here, and I want you to drop your eyes down
  • 34:01to the bottom of this figure and see that,
  • 34:04in considering how many diversity trainings
  • 34:07have been studied with experimental methods,
  • 34:10once you restrict it to the
  • 34:12sample size being 70 or more,
  • 34:15there's only two:
  • 34:16two studies of diversity training
  • 34:18in the last dozen years to try
  • 34:20to understand the effects of
  • 34:23diversity training that really have
  • 34:25any shot at uncovering a well-powered,
  • 34:27you know, reliable effect.
  • 34:29OK,
  • 34:29and I've already told you about one of them,
  • 34:31OK?
  • 34:33So this brings us to my other arguments,
  • 34:36which was that some of the most
  • 34:38popular prejudice reduction
  • 34:38ideas are not well supported.
  • 34:40Well, diversity training is one of them.
  • 34:41We are not at this stage in the
  • 34:45scientific literature able to
  • 34:47recommend to the public that diversity
  • 34:49trainings are an effective measure.
  • 34:51Now this is not to say that I
  • 34:53do not support having any kind
  • 34:55of training in a workplace or
  • 34:57any other kind of institution,
  • 34:59but it is to say that it is
  • 35:03an enormous problem that we don't
  • 35:07know about their effects, right?
  • 35:09And so. Of course,
  • 35:10this is averaging across any
  • 35:11kind of diversity training,
  • 35:13and I think that we can all think
  • 35:15of diversity trainings that we've
  • 35:16experienced or observed that we didn't
  • 35:18think would have a positive effect
  • 35:19and maybe some that we thought.
  • 35:21Well, this is this.
  • 35:22This is quite good, right?
  • 35:24But there's no distinguishing among them,
  • 35:26right?
  • 35:26There's two studies in the past
  • 35:28dozen years that have actually
  • 35:30looked at their causal effects.
  • 35:32And implicit bias has been something
  • 35:34that we've talked a great deal
  • 35:36about in the past dozen years.
  • 35:38However,
  • 35:39implicit bias trainings were included
  • 35:42in that diversity training category.
  • 35:45This is a surging category.
  • 35:46There are a couple of investigators
  • 35:48who I know of who have been producing
  • 35:50more work on trying to understand the
  • 35:53impacts of implicit bias training.
  • 35:54In particular,
  • 35:55the other thing that we looked
  • 35:56for in this category is we just
  • 35:58wanted to know there are some
  • 36:00really good meta analysis.
  • 36:01I could refer you to if you're interested on
  • 36:04the extent to which implicit bias can change,
  • 36:06period, right?
  • 36:07Even in you know basic lab studies.
  • 36:11We didn't include those here because
  • 36:13they weren't actual interventions.
  • 36:14What we were interested in here though,
  • 36:15is just.
  • 36:16What's the functional interdependence?
  • 36:18What's the relationship between
  • 36:20implicit bias and behavior?
  • 36:21So forget about implicit bias training,
  • 36:25what intervention out there,
  • 36:27anything: it could be contact,
  • 36:30it could be emotional
  • 36:32regulation, it could be
  • 36:34multicultural education.
  • 36:35Do any of them change implicit bias?
  • 36:38And if they do,
  • 36:39does behavior also change,
  • 36:41right?
  • 36:41So we were really interested if there
  • 36:43were any studies that measured implicit
  • 36:45bias or behavior as an outcome.
  • 36:46We captured both of those.
  • 36:49Both of those outcomes,
  • 36:50whether or not they were reported
  • 36:51in the abstract or not.
  • 36:52'cause we were so curious about
  • 36:54this question and I'm very
  • 36:55sorry to tell you that again,
  • 36:57this seems to be a magic number.
  • 36:58There are two studies in the entire
  • 37:01corpus from the past dozen years
  • 37:03that captured both implicit bias
  • 37:05and behavior as an outcome in
  • 37:07any kind of intervention study,
  • 37:09so we really can't tell you
  • 37:12what we know about interventions
  • 37:14changing implicit bias
  • 37:16and the extent to which that
  • 37:17is expressed in what
  • 37:18I think we can,
  • 37:19I think we could, agree
  • 37:21we care most about, which is behavior, the
  • 37:24expression of prejudice and bias,
  • 37:26right?
  • 37:27Discrimination, hate crime,
  • 37:29microaggressions,
  • 37:30all of the things that we care about.
  • 37:31OK, and then the final thing that I
  • 37:33was really attentive to as a psychologist,
  • 37:35because this is something in my
  • 37:36field that I hear quite a bit about.
  • 37:38Curious the extent to which this
  • 37:40is discussed in in in psychiatry.
  • 37:42Is that OK?
  • 37:44Here goes the argument.
  • 37:46OK,
  • 37:46this is a very small change,
  • 37:48this is a small effect size
  • 37:49that we've observed,
  • 37:50but this is something that can
  • 37:52build over time. So in essence this
  • 37:54can become self-reinforcing: people's
  • 37:56small attitude changes, small changes in
  • 37:59their emotional regulation around outgroups,
  • 38:02are going to have this positive reinforcement cycle.
  • 38:05So it's a it's a perfectly
  • 38:08valid theory of change,
  • 38:10and we could find no evidence for it.
  • 38:12But here again I want to be clear there's
  • 38:14an absence of evidence and the way you
  • 38:16look for it is you look to see whether
  • 38:18any of these studies are measuring,
  • 38:20change overtime longitudinally,
  • 38:22and we found very few studies that did so.
  • 38:27To the extent that it's not even
  • 38:29worth mentioning what they found,
  • 38:30because you know it was, you know,
  • 38:32a few out of a body of of 400 plus OK.
  • 38:37Alright, so the best research
  • 38:38shows very small effects.
  • 38:39That's the final argument that I made
  • 38:41at the beginning, and I call this my
  • 38:43"once more with feeling" figure,
  • 38:45'cause I've already showed you these data,
  • 38:47these are essentially the data from
  • 38:49the table and then from the graph and,
  • 38:51and this is charting the D,
  • 38:53the effect size right on the Y axis.
  • 38:56How?
  • 38:56How big of an effect do we find against
  • 39:00the standard error on on the X and
  • 39:03what's important about this figure
  • 39:04that's different is that you know these
  • 39:06all of these dots are a different study.
  • 39:08Right,
  • 39:08and it's robust to the studies
  • 39:09that are those outliers up there.
  • 39:10But I just want to show you all the data.
  • 39:12I'm not trimming anything,
  • 39:13so these are all the studies
  • 39:15in the meta analysis and this
  • 39:17fitted regression line tilting,
  • 39:19tilting downward to the left.
  • 39:21This is what this line says.
  • 39:25I've just complained to you about
  • 39:26all of these methodological problems.
  • 39:28You know there's a lot of error
  • 39:29in these studies,
  • 39:30so, you know, and there's a
  • 39:33lot of unrealistic effect sizes.
  • 39:36But the line's tilt shows you that
  • 39:39if we were just to spend the next
  • 39:42dozen years testing the same ideas,
  • 39:44the same interventions,
  • 39:46and just tightening our methods,
  • 39:47being much better about it,
  • 39:49preregistering, much larger sample sizes, etc.,
  • 39:52etc.,
  • 39:52what this line suggests is that
  • 39:55we would just keep finding smaller
  • 39:57and smaller effects, right,
  • 40:00and the line in fact crosses 0.
  • 40:02So the line is suggesting, again,
  • 40:04this is a prediction out of sample,
  • 40:05that if we just kept doing this,
  • 40:07if I could just keep simulating these same
  • 40:11interventions with larger and
  • 40:13larger samples and better methods,
  • 40:15we might actually find out that
  • 40:17we aren't having an effect, OK?
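
What that fitted line amounts to is a meta-regression of effect size on standard error, in the spirit of an Egger-type small-study test; here is a minimal sketch with simulated studies, where the intercept is the predicted effect for a hypothetical zero-error study.

```python
# Illustration only: regressing study effect sizes (d) on their standard errors.
# Data are simulated; the intercept estimates the effect as sampling error goes to zero.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_studies = 418
se = rng.uniform(0.05, 0.6, n_studies)            # per-study standard errors
true_effect = 0.0                                  # assume no underlying effect...
d = true_effect + 1.0 * se + rng.normal(0.0, se)   # ...plus small-study bias and noise

X = sm.add_constant(se)
fit = sm.OLS(d, X).fit()
intercept, slope = fit.params
print(f"slope (small-study bias):          {slope:.2f}")
print(f"intercept (predicted d at SE = 0): {intercept:.2f}")  # near zero here
```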
  • 40:19So that's the most depressing
  • 40:21argument of this of this paper.
  • 40:24But I think it's a good place to
  • 40:26pivot onto what the next generation
  • 40:28of prejudice reduction research
  • 40:29should look like.
  • 40:30And we we have a lot of
  • 40:33recommendations in in our paper and we,
  • 40:35we give those recommendations to
  • 40:38people who are interested in studying,
  • 40:40designing and studying prejudice
  • 40:43reduction interventions,
  • 40:44both for the laboratory,
  • 40:45which we think is extremely important
  • 40:47even though I myself
  • 40:48prioritize working in the field,
  • 40:50but, you know, especially for
  • 40:52research and development purposes,
  • 40:54we see the lab as extremely important;
  • 40:56and for those interested in field work.
  • 40:59I want to talk about something else,
  • 41:00though. I'm not going to go through
  • 41:02those recommendations today;
  • 41:03I want to talk about the way we've
  • 41:07been thinking about changing the way
  • 41:09we think about the interventions
  • 41:11themselves and using in fact,
  • 41:13a different model of change and
  • 41:15in thinking more about structural
  • 41:17interventions and their effects. OK,
  • 41:19so are we using the right model of change?
  • 41:22Very mindful of my audience.
  • 41:24I mean I'm always mindful of saying this
  • 41:25even in front of my psychology audiences,
  • 41:28but the current model of change is one
  • 41:30in which, even though I know
  • 41:33that the investigators who study
  • 41:35racism and religious prejudice
  • 41:37and ethnic bias and all of these
  • 41:39other prejudices,
  • 41:40even though I know these
  • 41:42investigators don't really believe that
  • 41:44it's just a psychological problem,
  • 41:45they understand it's structural,
  • 41:47it's really conceptualized as
  • 41:48purely a psychological problem
  • 41:50in all of these interventions, right?
  • 41:52And So what we then do is we create these
  • 41:55highly individualistic interventions right?
  • 41:58These these mentalizing kinds of
  • 42:00interventions in order to create
  • 42:02individual psychological change as
  • 42:04well as social societal change.
  • 42:06So it's this bottom-up, cumulative
  • 42:09theory of social change, and I wanna
  • 42:11talk about this alternative model,
  • 42:13which is to attack a psychological
  • 42:16problem with a structural intervention
  • 42:18in order to create individual
  • 42:20psychological change.
  • 42:21I don't want to throw out mental
  • 42:23life as a target of intervention,
  • 42:25but I want to think about what
  • 42:28intervention might produce a larger effect,
  • 42:30potentially right?
  • 42:31Again, this is our editorial at the end.
  • 42:35What do we mean by structural
  • 42:36interventions and is this something
  • 42:38that psychologists and psychiatrists
  • 42:39can can participate in? I think so.
  • 42:42Structure of course, means institutions,
  • 42:45rules leaders.
  • 42:46So the changing of laws and rules
  • 42:49and organizational procedures,
  • 42:52the decisions and communications
  • 42:53from leaders absolutely.
  • 42:55And and this is what is traditionally
  • 42:58conceptualized as structural by all
  • 43:00of my social science colleagues.
  • 43:02But we also want us to think about.
  • 43:04Social structures,
  • 43:06so these are the levers that these
  • 43:10are the levers of of change that
  • 43:13involve collectives,
  • 43:15but times when that kind of
  • 43:18collective signal is not sparked
  • 43:20by these traditional structures,
  • 43:21but rather by more unofficial
  • 43:24social grouping.
  • 43:25So these mass collective
  • 43:27experiences that we can have in
  • 43:29media unofficial organizations,
  • 43:31my graduate student and I were thinking
  • 43:33about how to give an example of this,
  • 43:35and we thought.
  • 43:36You know,
  • 43:36there's plenty of unofficial organizations
  • 43:38that influence influences all the time.
  • 43:41And she mentioned Black Twitter,
  • 43:43which you know does not have a board
  • 43:45of directors but has been extremely
  • 43:47influential in guiding conversations
  • 43:49around race and and culture and and politics,
  • 43:52right?
  • 43:53Mass media events in person gatherings,
  • 43:56zoom gatherings,
  • 43:58simultaneous collective experiences.
  • 44:00This is hard, though,
  • 44:02because behavioral theory,
  • 44:03psychological theory only
  • 44:05sometimes even mentions structure.
  • 44:06In it and so let me start with
  • 44:10some examples of theory that does
  • 44:12relate to structure and and to
  • 44:14give you examples of how I think.
  • 44:16In the past dozen years and plus
  • 44:18we've used a lot of psychological
  • 44:20theory about prejudice to design
  • 44:22interventions that are less structural,
  • 44:24but we could design them to be more
  • 44:27structural and and so here's my example.
  • 44:29Social norms theory,
  • 44:30which is a theory we work
  • 44:32with a lot in my lab,
  • 44:33does make predictions about leadership about
  • 44:35how leaders can signal new social norms.
  • 44:38About what is typical,
  • 44:40what is desirable?
  • 44:41Regarding prejudice and many other things,
  • 44:43of course.
  • 44:44And you know one thing that we've been
  • 44:46trying to invest in is to investigate
  • 44:49attitude and perceived norm change in
  • 44:52response to Supreme Court decisions,
  • 44:54to see the extent to which Supreme
  • 44:57Court decisions about marginalized
  • 45:00groups change the way we feel about them,
  • 45:04think about them and the way we
  • 45:06think that other people residing in
  • 45:08the United States think about them.
  • 45:09But a less structural intervention
  • 45:11based on this theory, and
  • 45:13this is an approach that you see a lot in
  • 45:16the meta-analysis, would be to send
  • 45:18emails to people just individual
  • 45:20prompts reminding them about the,
  • 45:22say progressive orientation of their leader,
  • 45:24right?
  • 45:24And so it's a completely different
  • 45:26experience to read an email
  • 45:28that's addressed just to you,
  • 45:30but I think that this example is just
  • 45:31trying to highlight this approach.
  • 45:33Both are testing the same idea,
  • 45:35but would we expect one to have a
  • 45:37much bigger effect than the other?
  • 45:38We would, and so where should
  • 45:41we be putting our
  • 45:42energies, essentially, in testing
  • 45:43some of these theories?
  • 45:45Let me take an individually oriented
  • 45:48theory that doesn't really mention
  • 45:50structure as we've defined it: perspective-
  • 45:52taking theory is something
  • 45:55that's surging right now
  • 45:57in the literature on prejudice
  • 45:59reduction and also attitude change
  • 46:02and persuasion. In particular,
  • 46:03scholars
  • 46:04in the past, I would say, 8 to 10
  • 46:07years have been very interested in
  • 46:09ideas about perspective giving,
  • 46:11so not asking a person to try to imagine.
  • 46:16Or to read about and then simulate
  • 46:18the perspective of others right?
  • 46:20But actually to sit in nonjudgmental,
  • 46:23listening about and and while they
  • 46:25listen to the perspective of others
  • 46:27and in particular members of oppressed,
  • 46:30marginalized,
  • 46:30stigmatized group.
  • 46:31So that's called perspective giving
  • 46:33where you know the the onus is not
  • 46:35on imagining it's it's it's on or
  • 46:37the emphasis is not on imagining
  • 46:39the perspectives, taking it,
  • 46:40but rather listening, taking it in,
  • 46:43giving and and then the person gives it.
  • 46:45So here's an example of
  • 46:47what I would describe as a
  • 46:49social structural intervention
  • 46:50that tests this hypothesis.
  • 46:52So a famous study that you probably
  • 46:54have read about, the paper
  • 46:57by Broockman and Kalla,
  • 46:58used perspective
  • 47:00taking with canvassers who
  • 47:03were organizing around
  • 47:05issues of transgender rights in
  • 47:08Florida, and testing whether going door
  • 47:11to door and asking those who answered
  • 47:15the door not just to listen to them
  • 47:18about the issue, which in particular
  • 47:21in this first study was about why
  • 47:25transgender individuals should use the
  • 47:27correct bathroom that reflects
  • 47:29their gender; instead, they would knock
  • 47:32on the door and ask the person who
  • 47:34answered to tell them about a time when
  • 47:36they felt that they had been excluded,
  • 47:39marginalized for some aspect of their
  • 47:41identity, and to listen to them and to
  • 47:43that experience and then to relate that to
  • 47:46the reason why they were canvassing today.
  • 47:48To say that it was
  • 47:49similar to some of the
  • 47:51issues that transgender people face.
  • 47:53So what's structural about that?
  • 47:56Because, you know, another way to test
  • 47:58that idea is, you know, to text
  • 48:00people, maybe a little nudge to
  • 48:01stimulate perspective getting: sign
  • 48:03up for a text service and, you know,
  • 48:05get a message every day,
  • 48:06you know,
  • 48:07try to think about transgender
  • 48:08people and how it feels for them
  • 48:10to, blah blah blah, right? What's
  • 48:11structural about the first one, to me?
  • 48:14It's this collective that's brought
  • 48:15into the experience.
  • 48:16So sure, it's a dyadic intervention,
  • 48:18or maybe there's a triad;
  • 48:20there's usually two people canvassing and
  • 48:22listening to this person at the door,
  • 48:24but I think that there's this
  • 48:26imagined collective to it, right?
  • 48:27Because when you open the door and canvassers
  • 48:30arrive to talk to you, you know
  • 48:31that those canvassers are going to
  • 48:33everybody else in your neighborhood,
  • 48:35right?
  • 48:35So your neighbors are having this
  • 48:36experience at the same time,
  • 48:38and you know also that these canvassers
  • 48:41represent a larger collective and an
  • 48:44organized group of political movements.
  • 48:46And so you're,
  • 48:47you're coming into contact with
  • 48:49something quite larger that these
  • 48:51people are are representing.
  • 48:53And so I think that my prior would be
  • 48:56that this kind of intervention,
  • 48:59right,
  • 49:00using the same theory,
  • 49:01I don't think we should
  • 49:02throw out our theories, would be
  • 49:05more effective,
  • 49:06and indeed I mean this is just
  • 49:08cherry picking an example for you,
  • 49:09but this is a study that has gained
  • 49:11so much traction in part because
  • 49:14the intervention has quite long
  • 49:16lasting effects, and so these
  • 49:19investigators have now repeated
  • 49:22the intervention with many different
  • 49:24issues and targeting many different
  • 49:26marginalized groups, and they have a
  • 49:28completely ingenious measurement strategy,
  • 49:30which is to survey people online
  • 49:33about these issues in an ostensibly
  • 49:36unrelated voter survey,
  • 49:38and they find stable attitude change
  • 49:42following this canvasser visit
  • 49:44that's even resistant to things like attack
  • 49:48ads and so forth when they feature
  • 49:50kind of the other side of these
  • 49:52issues on these voter polls.
  • 49:56OK.
  • 49:58Right? Because of Zoom,
  • 50:00you'd think I'd be better at this;
  • 50:02I just can't see my last point.
  • 50:04Yes, and there are many other
  • 50:06structural intervention examples
  • 50:07that I could get into,
  • 50:08but I I do want to leave time for questions,
  • 50:10so I'm going to move to the end now.
  • 50:14So what would this require?
  • 50:16This kind of next generation, this this?
  • 50:18These proposals about
  • 50:20trying to use our theories,
  • 50:22but to design more,
  • 50:25more structural interventions with them.
  • 50:27First of all, I think that it would
  • 50:29demand of social scientists that we
  • 50:31improve our skills at thinking about
  • 50:33structural expressions of our theories.
  • 50:35I think that we default often to
  • 50:37thinking about these very individualized,
  • 50:39personally delivered interventions.
  • 50:41I think that we would have
  • 50:44to engage in much more collaborative
  • 50:46work across disciplines.
  • 50:48Because some of us specialize
  • 50:50in these theories.
  • 50:51Some of us specialize in understanding
  • 50:53how to measure aspects of
  • 50:56mental life and others of us are
  • 50:58actually at the table when
  • 51:00political campaigns are designed
  • 51:02or new infrastructures in our
  • 51:04communities are built, and I think that
  • 51:07collaborating together to think about,
  • 51:09you know, these actual structures,
  • 51:11these social structures, and
  • 51:13how they can be used to test
  • 51:16ideas about prejudice reduction
  • 51:17would be much more fruitful.
  • 51:19I also think that we need to more
  • 51:22seriously invest in research on how
  • 51:25more top down interventions can lead
  • 51:27to backlash or to resistance.
  • 51:32I think that it opens up this really
  • 51:34interesting space for social scientists
  • 51:36who aren't necessarily involved
  • 51:38at the moment in equity reform to
  • 51:40also be invested in equity reform,
  • 51:42because equity reform,
  • 51:43as I see it, so you know,
  • 51:45outside of prejudice reduction in research,
  • 51:48things like improving hiring practices,
  • 51:52retention practices,
  • 51:53improving the climate of universities,
  • 51:58corporations, communities.
  • 52:01You know the the the reforms that are
  • 52:05being asked for are most often being asked
  • 52:07for on the basis of justice and values,
  • 52:10which is absolutely appropriate and
  • 52:11should be the leading rationale for
  • 52:14why these reforms should be made.
  • 52:15But I think that this actually adds
  • 52:17if we're going to seriously pursue
  • 52:20more structural interventions and
  • 52:21try to understand the psychological
  • 52:24changes that we get from them that
  • 52:26would actually add more social
  • 52:28scientists to that push,
  • 52:29because they'd be interested
  • 52:30in studying these.
  • 52:31These changes prospectively right,
  • 52:33and so I I, I think that that's an
  • 52:38interesting outcome of this kind of call.
  • 52:40I, I think also just methodologically
  • 52:42we're going to have to get a lot more
  • 52:44familiar with or collaborate with other
  • 52:46social scientists who can help us to
  • 52:48design studies that aren't just little,
  • 52:51you know,
  • 52:52two by two you know experiments,
  • 52:54but use, you know,
  • 52:56more creative strategies for
  • 52:58studying the real world
  • 53:00and its thicket of various variables
  • 53:03and threats to causal inference.
  • 53:07And so I'm going to end there and thank
  • 53:09you so much for your attention and I'd
  • 53:12love to hear questions and feedback.