
Janine Bijsterbosch “Evaluating intrinsic brain organization in individuals to identify causes of network overlap”

March 07, 2023

Transcript

  • 00:07Our next speaker is Janine from WashU.
  • 00:11Thank you. I'm excited to be here.
  • 00:15This is one of my favorite conferences:
  • 00:17the snowboarding, and it's got many of
  • 00:19my favorite people, good discussions
  • 00:21like we've already had. So I can't wait.
  • 00:25I would class this talk as, like, the
  • 00:28sticking-my-head-in-the-sand talk.
  • 00:31Because I agree with a lot of
  • 00:32things that have been said so far.
  • 00:34What I'm actually going to do is do a deep,
  • 00:35deep dive on just one study
  • 00:37that we've done in the lab.
  • 00:39And I'm largely going to ignore behavior
  • 00:41altogether and just look at some brains.
  • 00:43We'll get back to that. Hope
  • 00:46you're on board with that. So
  • 00:48I'm going to talk about evaluating
  • 00:50weighted network organization and
  • 00:52individual differences within that
  • 00:53and maybe what that might tell us
  • 00:56about overlapping organization.
  • 00:58So I don't think I need to
  • 01:00convince anyone here that there are
  • 01:02important individual differences
  • 01:04in functional brain organization.
  • 01:06These are a couple of studies using
  • 01:09different methods from people
  • 01:10in the room showing just that.
  • 01:12And so these studies are showing
  • 01:15strikingly large differences in
  • 01:17functional organization between
  • 01:19people that are relatively stable
  • 01:22within individuals, and therefore seem
  • 01:24to not just be noise,
  • 01:29but may be meaningful.
  • 01:30In some of our earlier work we did
  • 01:33indeed show that these might be
  • 01:35meaningful, and this is the only
  • 01:37behavior built in here.
  • 01:40What we did was a canonical
  • 01:42correlation analysis using Human
  • 01:44Connectome Project data.
  • 01:45We fed in a bunch of subject-
  • 01:48specific networks and we tried to
  • 01:51predict behavioral measures, and
  • 01:53we did indeed find that spatial
  • 01:55network topology predicted behavior
  • 01:57relatively well and, most
  • 01:59importantly, better than some other features
  • 02:01of the resting state data,
  • 02:02like network edges or
  • 02:04connectivity matrices.
  • 02:06And so this video is showing two networks,
  • 02:08one on the left, one on the right.
  • 02:10And kind of, as we cycle
  • 02:12through the behavioral continuum
  • 02:14from kind of more negative traits
  • 02:16to more positive traits,
  • 02:18you can see how these two networks morph in
  • 02:21terms of their shapes as a function of behavior.
  • 02:24And to some degree this has been
  • 02:26replicated again by some people
  • 02:28in the room using a different
  • 02:30approach, but also showing that
  • 02:32accurately capturing these
  • 02:34individual differences in the spatial
  • 02:36topology of resting state organization
  • 02:39helps improve behavior prediction.
  • 02:44In some follow up work,
  • 02:45we looked at the role that
  • 02:47network overlap plays in this.
  • 02:49And so here, these are two kind of,
  • 02:52you know, cartoonish networks that might
  • 02:54in some people not overlap at all,
  • 02:56and then there might be a continuum of
  • 02:59individual differences in terms of the
  • 03:02degree of overlapping network organization.
  • 03:04And we found that that seems to be
  • 03:06important with all of the caveats
  • 03:08of behavior prediction beforehand.
  • 03:09So I'm showing you the same video
  • 03:11here, cycling through that
  • 03:13same kind of continuum of
  • 03:14negative to positive traits,
  • 03:16but instead of showing individual networks,
  • 03:19I'm showing essentially the sums:
  • 03:21at each vertex, the number of
  • 03:23networks that contribute at that vertex.
  • 03:25So the brighter the yellow,
  • 03:27the more overlap there is.
  • 03:28And this actually explained most
  • 03:30of the association between our
  • 03:31network topology and behavior.
  • 03:33So this seems to be something
  • 03:35that is maybe a meaningful feature.
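To make that per-vertex overlap count concrete, here is a minimal Python sketch, assuming thresholded subject-level spatial maps stored as a vertices-by-networks array; the array names, the random stand-in data, and the threshold are hypothetical illustrations, not the analysis code behind the slides.

    import numpy as np

    def overlap_count(spatial_maps, threshold=2.0):
        """Count, at each vertex, how many networks exceed the weight threshold.

        spatial_maps : (n_vertices, n_networks) array of network weights
        threshold    : cutoff separating signal from background (illustrative)
        """
        return (np.abs(spatial_maps) > threshold).sum(axis=1)

    # Example with random stand-in data: 10,000 vertices, 20 networks.
    # In the video, a brighter yellow corresponds to a higher count.
    maps = np.random.default_rng(0).standard_normal((10000, 20))
    counts = overlap_count(maps)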
  • 03:42So. The interest so far: I've shown some
  • 03:44brains, because I promised you
  • 03:47we would get back to the brains, but I've
  • 03:49mostly shown you features from them,
  • 03:52kind of from different
  • 03:54analysis pipelines.
  • 03:57This is the analysis pipeline
  • 03:58that we use a lot.
  • 03:59It's probabilistic functional
  • 04:00modes, or Profumo, or PFM.
  • 04:02It was developed by Sam Harrison
  • 04:04and Steve Smith in Oxford.
  • 04:06It's essentially like a slightly better
  • 04:09version of ICA plus dual regression.
  • 04:13So it's
  • 04:14another product model, just like ICA,
  • 04:16but it has some nice features
  • 04:19in that it's hierarchical,
  • 04:21so it has both a group model
  • 04:24estimate in there and a subject-
  • 04:26specific estimate in there,
  • 04:27and it iteratively optimizes both.
  • 04:30It also removes the independence constraint,
  • 04:32which is of course important if
  • 04:34you want to accurately estimate
  • 04:36overlap, by using some other priors
  • 04:38in this Bayesian framework,
  • 04:39like the haemodynamic response function.
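As a rough schematic of the kind of hierarchical product model being described here, in notation introduced for illustration rather than Profumo's exact formulation:

    D_s \approx P_s A_s + E_s, \qquad P_s \sim p\left(P_s \mid P_{\mathrm{group}}\right)

where, for subject s, D_s is the vertices-by-timepoints data matrix, P_s the vertices-by-modes spatial maps, A_s the modes-by-timepoints time courses (which carry priors such as the haemodynamic response function), and E_s residual noise. The prior ties each subject's maps to a group-level template that is itself estimated, which is the group/subject hierarchy just mentioned; a single-subject analysis, as used later in this talk, simply drops that group level.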
  • 04:41One thing I'll say here that
  • 04:43is a little bit more philosophical,
  • 04:44or maybe a point for
  • 04:46discussion later at the reception is
  • 04:48that I think we have so many different
  • 04:51ways of analyzing and like reducing
  • 04:53the dimensionality of resting state data.
  • 04:55And I've shown many of them here, and
  • 04:59that's amazing and that's useful.
  • 05:01But at the same time I think it
  • 05:04sometimes makes it confusing and
  • 05:05difficult because sometimes I find
  • 05:07it hard to understand how my results
  • 05:09map on to results that someone else
  • 05:11might find with a different kind of
  • 05:13method in a different representation.
  • 05:15And so I think as a field we can
  • 05:17do more, coming to a kind of
  • 05:19interoperability across our methods.
  • 05:21I think
  • 05:22it's often the same underlying source
  • 05:24of variance in the data that might
  • 05:26be driving our results that we're
  • 05:28looking at from different angles
  • 05:29when we're using different methods.
  • 05:30And so, to kind of boost that,
  • 05:33I think we can do more in terms
  • 05:35of trying to build bridges across
  • 05:37different analyses or different
  • 05:39brain representations, to try to
  • 05:40kind of improve our understanding
  • 05:41of the relationship between our
  • 05:43ways of looking at the data
  • 05:46and the kind of underlying
  • 05:47sources of variance in the data.
  • 05:49That's as philosophical
  • 05:50as I'm going to get, so.
  • 05:53Back to brains.
  • 05:56So what do I want to ask?
  • 06:00The first question that I want to ask,
  • 06:02and I'm going to focus on humans first:
  • 06:04can we reliably estimate weighted
  • 06:06resting state networks using
  • 06:07data from only a single subject?
  • 06:09And, you know, Profumo
  • 06:11was explicitly developed to have, like,
  • 06:13this group and subject level hierarchy
  • 06:16to achieve correspondence across people.
  • 06:18And so this is kind of moving
  • 06:20away from that goal of Profumo,
  • 06:22but I think at the same time there are many
  • 06:25applications in which we want
  • 06:26to do just this.
  • 06:27You know, if we're doing a deep
  • 06:29phenotyping, sort of Midnight Scan
  • 06:31Club type study, where we're scanning
  • 06:33a few people for a very long time,
  • 06:36we want to estimate networks
  • 06:37just from individual people.
  • 06:39If we want to do some translational
  • 06:41work in animals,
  • 06:41which I would say is something that
  • 06:43we have to do more of to really gain a
  • 06:46better understanding of what we're measuring,
  • 06:47then, you know,
  • 06:48if we're doing monkey work,
  • 06:49we can't get that many monkeys, so
  • 06:51we might want to do individual
  • 06:53estimation from the single subject.
  • 06:55And also, ultimately, if we want to
  • 06:57translate this to patients, and we're
  • 06:59seeing that individual 65 year old
  • 07:02patient that was being talked
  • 07:04about before us, and we want to make a
  • 07:06prediction for that patient, then
  • 07:07presumably we want to just be able to
  • 07:09analyze the data from that patient.
  • 07:11I mean we'll probably compare it in
  • 07:13some sort of normative way to other people,
  • 07:15but we don't want to have the computational
  • 07:18cost of re-analyzing everything.
  • 07:19So I think this is an important question
  • 07:22for a few different applications.
  • 07:24Then I want to ask: is spatial
  • 07:26overlap present in individual subjects?
  • 07:28It could just be driven by the group bias,
  • 07:30because group averaging is essentially
  • 07:32smoothing and blurring things.
  • 07:35So maybe it's just a group-averaging aspect.
  • 07:37And then the last thing I want to
  • 07:40ask, or touch on a little bit,
  • 07:42is to try and get to the mechanistic
  • 07:44level and try and ask: can we understand
  • 07:47better what gives rise to, and how should
  • 07:49we understand, these features of spatially
  • 07:51overlapping network organization?
  • 07:55So I use data from the
  • 07:57Human Connectome Project.
  • 07:57This is only 20 subjects, because
  • 07:59these are all of the 20
  • 08:02subjects that have 12 scans.
  • 08:03So these are people that did both the 3T
  • 08:07and 7T protocols, and the sample includes twin pairs.
  • 08:11And we ran Profumo in a
  • 08:12bunch of different ways.
  • 08:13So a standard group analysis,
  • 08:15a single subject analysis,
  • 08:17a split-half single
  • 08:19subject analysis, and kind of
  • 08:21systematically adding more data in,
  • 08:23just to look at data requirements.
  • 08:26And a group analysis
  • 08:28matching the amount of data that
  • 08:30we have in individual subjects.
  • 08:33And then we did this at
  • 08:34the dimensionality 20.
  • 08:35It's always arbitrary,
  • 08:36but we wanted to look at kind of
  • 08:39the canonical large scale networks.
  • 08:40And then within these 20 networks,
  • 08:42we look for networks that have
  • 08:44both high within-subject split-half
  • 08:46reliability and also a high kind
  • 08:49of subject-to-group correlation.
  • 08:51So both testing kind of
  • 08:53reproducibility at the subject
  • 08:54level and correspondence with the
  • 08:57group level or across subjects.
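As a minimal sketch of these two selection criteria, treating both as spatial correlations; the random stand-in maps and the 0.5 cutoffs here are hypothetical, not the values used in the study.

    import numpy as np

    rng = np.random.default_rng(0)
    n_networks, n_vertices = 20, 10000

    # Stand-ins for estimated maps: two within-subject half-splits and group maps
    split1 = rng.standard_normal((n_networks, n_vertices))
    split2 = split1 + 0.5 * rng.standard_normal((n_networks, n_vertices))
    group = split1 + 0.8 * rng.standard_normal((n_networks, n_vertices))

    def spatial_corr(a, b):
        """Pearson correlation between two spatial maps."""
        return np.corrcoef(a, b)[0, 1]

    reliability = np.array([spatial_corr(split1[k], split2[k])
                            for k in range(n_networks)])
    correspondence = np.array([spatial_corr(split1[k], group[k])
                               for k in range(n_networks)])

    # Keep networks that are both reproducible within subject and
    # recognizable at the group level.
    keep = np.where((reliability > 0.5) & (correspondence > 0.5))[0]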
  • 09:01I promised brains; here are a lot of brains.
  • 09:04These are the group modes.
  • 09:05So the top row is using the main
  • 09:09standard analysis, and the bottom row is
  • 09:11essentially the same thing using less
  • 09:14data to match the amount of data that
  • 09:16we have in the subject level results.
  • 09:18I think there's no big surprises here.
  • 09:20I think many of you probably
  • 09:22recognize a lot of these networks.
  • 09:24You get less SNR when you use less data.
  • 09:27No big surprise there either.
  • 09:30That seems to impact the negative
  • 09:31weights a little bit more
  • 09:32than the positive weights,
  • 09:34which might be interesting.
  • 09:35But let's get to the individual differences.
  • 09:38So this here is the same view
  • 09:41on the same set of results,
  • 09:44but from an individual subject.
  • 09:45So the top row this time is when the analysis
  • 09:48is run using data from just that subject.
  • 09:51So there's no group kind of information
  • 09:54that the model sees at any one point.
  • 09:56And the bottom row is the same
  • 09:58subject specific estimate from the
  • 10:00hierarchical model in which there is
  • 10:02kind of the group level above it.
  • 10:04And so it looks,
  • 10:06you know,
  • 10:07there's a lot less blobbiness, as I was
  • 10:09saying, compared with the previous slide,
  • 10:11you know,
  • 10:11there's a lot more kind of detail in there.
  • 10:14If you zoom in on it a little bit,
  • 10:16in some areas you seem to be
  • 10:18getting that really kind of subject-
  • 10:20specific organization and those features.
  • 10:22And if I go to a different subject and
  • 10:25kind of toggle back and forth between them,
  • 10:28then you can see,
  • 10:28you can start to see,
  • 10:30you know these are recognizably the
  • 10:32same networks, and yet you see these
  • 10:34relatively substantial shifts
  • 10:36and kind of changes in organization.
  • 10:44This is a sort of winner-
  • 10:46take-all representation, or find-the-
  • 10:49biggest representation, for all subjects,
  • 10:51and they're kind of
  • 10:54grouped into twins. For the sake of time,
  • 10:57I'm not going to spend too much time on this;
  • 10:58you could stare at this for a long
  • 11:00time. But I do want
  • 11:02to talk about this.
  • 11:03So these are some results.
  • 11:05On the far left hand side
  • 11:07is test-retest reliability,
  • 11:08so within individual subjects.
  • 11:12Each dot is one of the 12
  • 11:14networks,
  • 11:15and they're there for all of the subjects.
  • 11:17I highlighted in red and green,
  • 11:18the example subjects,
  • 11:19just so that I can show you that
  • 11:21They're like middle of the road subjects.
  • 11:23They're not particularly fancy
  • 11:24or good or bad.
  • 11:26The second row in is subject-
  • 11:29to-group similarity,
  • 11:31so that's essentially quantifying these two:
  • 11:34the correlation between the
  • 11:36top and the bottom row here.
  • 11:38And that's interesting.
  • 11:39One of the things I think we worry
  • 11:42about when we're estimating from
  • 11:44just a single subject is correspondence.
  • 11:46You know, how do we,
  • 11:48how do we relate it back to other
  • 11:49people, and can we compare apples
  • 11:51to apples, or are we now, you
  • 11:52know, comparing apples to oranges?
  • 11:54And I think the results were kind
  • 11:56of surprising to me, in that at
  • 11:58least at this level of dimensionality,
  • 12:00which is a relatively low, you know,
  • 12:02large-scale network dimensionality,
  • 12:05we get pretty high correspondence
  • 12:08without enforcing that
  • 12:11in the model itself.
  • 12:13The other interesting thing is that
  • 12:14you kind of see this stepping effect.
  • 12:16So these two on the left,
  • 12:17these two columns on the left
  • 12:19are kind of within subjects,
  • 12:21the next two are twins, and the next two
  • 12:23are between-subjects for non-twins.
  • 12:26And so you can see, you know, within-
  • 12:28subjects is most reliable, twins are
  • 12:29next most reliable, and then you
  • 12:31get a slight further decrease
  • 12:33in the next one.
  • 12:34Another thing you see is this
  • 12:37systematic kind of bump.
  • 12:39So this twin similarity,
  • 12:41the middle one here, is using
  • 12:43individual data, and then the next
  • 12:45one is using group-informed data,
  • 12:48and then it's the same thing here
  • 12:50for between non-twin subjects:
  • 12:52individual data runs, then group runs.
  • 12:54And so you do see that group bias
  • 12:57aspect there a little bit,
  • 12:58but you see that you achieve correspondence
  • 13:01even without needing that group bias.
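A sketch of how those within-subject, twin, and unrelated-pair similarities could be computed as spatial correlations; the twin-pairing dictionary and the stand-in maps are hypothetical.

    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(1)
    n_subj, n_vertices = 20, 10000

    # Stand-in subject maps for one network, two half-splits per subject
    split1 = rng.standard_normal((n_subj, n_vertices))
    split2 = split1 + 0.5 * rng.standard_normal((n_subj, n_vertices))
    twin_of = {0: 1, 2: 3, 4: 5}  # assumed twin pairs, illustration only

    def corr(a, b):
        return np.corrcoef(a, b)[0, 1]

    within = [corr(split1[s], split2[s]) for s in range(n_subj)]
    twins = [corr(split1[a], split1[b]) for a, b in twin_of.items()]
    unrelated = [corr(split1[a], split1[b])
                 for a, b in combinations(range(n_subj), 2)
                 if twin_of.get(a) != b and twin_of.get(b) != a]
    # The stepping effect described above: within > twin > unrelated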
  • 13:05I'm not going to dwell on this.
  • 13:07The more data you get,
  • 13:08the better you do.
  • 13:09And obviously you're going to need
  • 13:11high quality data to do this.
  • 13:15The next question was is there overlap,
  • 13:18is there a spatial overlap?
  • 13:19And the answer is yes there is.
  • 13:21So on the left hand side here is a
  • 13:24kind of overlap matrix at the
  • 13:26group level, and on the right hand side
  • 13:29it's showing, on the surface for each
  • 13:31individual, how many networks overlap
  • 13:34within that subject within that area.
  • 13:37And this is showing that
  • 13:38on the spatial map,
  • 13:40so you can see, again for the
  • 13:42same two example subjects,
  • 13:43that the main areas of
  • 13:46overlap are in that broad temporo-
  • 13:49parietal-occipital junction area,
  • 13:52which I think is interesting
  • 13:53because I think with other methods,
  • 13:55like parcellation methods, that's
  • 13:56also where you see a large amount
  • 13:58of individual differences.
  • 13:59So that again comes back to this
  • 14:01question of how do we interpret
  • 14:03results from different methods and
  • 14:05how do they kind of tap into the same
  • 14:07underlying variance in the data.
  • 14:12And so with my last few minutes,
  • 14:13I want to try and understand these.
  • 14:16I think these overlap areas are
  • 14:18interesting and important, and
  • 14:20so I want to try and understand
  • 14:22them a little bit better.
  • 14:23I have three hypotheses about
  • 14:24what might be happening.
  • 14:25If you have more, please let me know.
  • 14:29The first one is the
  • 14:31kind of mixing hypothesis;
  • 14:33this is the idea of interdigitation.
  • 14:36So maybe there's like ribbons of
  • 14:38network one and network two kind of
  • 14:40you know alternating and we're not
  • 14:42quite resolving that based on our data.
  • 14:45The second hypothesis is a
  • 14:47kind of a dynamic hypothesis.
  • 14:49So maybe what's happening in this
  • 14:50overlap area is that it's either part
  • 14:52of network one or part of Network 2,
  • 14:54but it's kind of going back and
  • 14:56forth in time between them.
  • 14:57And of course given that perfumo
  • 14:59is a is a stationary method that
  • 15:02would show up as as kind of overlap.
  • 15:05Or maybe the third one and maybe the most
  • 15:08exciting one is a coupling hypothesis.
  • 15:10Is this area really kind of part of
  • 15:12both networks at the same time and
  • 15:14that could be really cool in terms of,
  • 15:16you know,
  • 15:16maybe there's some sort of integration or
  • 15:18meaningful, cross-network
  • 15:20processing happening in these regions?
  • 15:24And so I'm going to try and ask this
  • 15:28question, trying to test these different
  • 15:30hypotheses using two networks of interest.
  • 15:33So there's the red one, there's the yellow one,
  • 15:35and you can see the areas of orange
  • 15:38are kind of the overlap between them.
  • 15:40And if we look at these, these are
  • 15:42the two example subjects,
  • 15:43you see there's individual differences:
  • 15:45some have more overlap, some less overlap.
  • 15:48What I want to do is essentially isolate
  • 15:51vertices that are just network one,
  • 15:54nothing else; just network two, nothing else;
  • 15:57and just the overlap between those two,
  • 16:00without any third or fourth networks.
  • 16:02And so I'm kind of going to clean this out:
  • 16:05there are some vertices that are also
  • 16:06going to have contributions from some
  • 16:08of the other networks that we estimated.
  • 16:10I just want to keep this as clean
  • 16:12and controlled a comparison as possible.
  • 16:14So that's what I've done here.
  • 16:15So red vertices are in the
  • 16:19overlap between these two specific networks,
  • 16:22and black are one and green are the other.
  • 16:25And so now you can estimate average
  • 16:28time courses across these vertices,
  • 16:30and we get a time course for
  • 16:32each of these three things.
  • 16:34And we can do that
  • 16:35completely separately
  • 16:37for each of the 12 runs,
  • 16:39completely independently,
  • 16:40repeating all of these steps.
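A minimal sketch of that vertex isolation and time-course averaging for one run; the membership masks, array shapes, and random data are hypothetical stand-ins.

    import numpy as np

    rng = np.random.default_rng(2)
    n_vertices, n_timepoints = 10000, 1200
    run_data = rng.standard_normal((n_vertices, n_timepoints))

    # Stand-in thresholded membership masks for the two networks of
    # interest and for all other estimated networks combined
    in_net1 = rng.random(n_vertices) < 0.10
    in_net2 = rng.random(n_vertices) < 0.10
    in_others = rng.random(n_vertices) < 0.20

    # "Clean" vertex sets: exclusively network one (black), exclusively
    # network two (green), or exactly these two and nothing else (red)
    only1 = in_net1 & ~in_net2 & ~in_others
    only2 = in_net2 & ~in_net1 & ~in_others
    overlap = in_net1 & in_net2 & ~in_others

    # One average time course per vertex set, computed per run
    ts1 = run_data[only1].mean(axis=0)
    ts2 = run_data[only2].mean(axis=0)
    ts_overlap = run_data[overlap].mean(axis=0)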
  • 16:43And so now we have
  • 16:45the true overlap time series,
  • 16:46but we can also create hypothesis-
  • 16:49based, simulated
  • 16:50versions of the overlap time series.
  • 16:53So for the mixing hypothesis, one of
  • 16:55the things I can do is, for
  • 16:57half of the red vertices I can just
  • 16:59pick a random black vertex, and for the
  • 17:02other half I can literally pick a random
  • 17:04green vertex, and then I can re-average them.
  • 17:06And so I get a simulated version using the
  • 17:08data from this individual, but not using
  • 17:10any of the red parts of the data, right?
  • 17:16coupling hypothesis,
  • 17:17so at any time point I can just sum up
  • 17:19the black and the and the green one.
  • 17:21Or maybe multiply them together.
  • 17:22Linear and nonlinear kind of coupling
  • 17:27hypothesis?
  • 17:28Simulated versions?
  • 17:28And then for the switching one,
  • 17:31I can kind of just go back and
  • 17:33forth between them in multiple ways,
  • 17:34different orders,
  • 17:36different time windows,
  • 17:38different ways of doing that.
  • 17:39And so now I've come up with
  • 17:42essentially 8 different simulated
  • 17:44versions of the overlap time series
  • 17:46to test these different hypotheses.
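To illustrate the three families of simulated overlap time series and the comparison against the true one, here is a self-contained sketch; the window length, noise levels, and stand-in signals are hypothetical, and the actual analysis used eight variants rather than the four shown.

    import numpy as np

    rng = np.random.default_rng(3)
    n_vox, n_t = 200, 1200

    # Stand-in single-network time courses and vertex-level data
    ts1 = rng.standard_normal(n_t)                       # network one (black)
    ts2 = rng.standard_normal(n_t)                       # network two (green)
    vox1 = ts1 + 0.5 * rng.standard_normal((n_vox, n_t))
    vox2 = ts2 + 0.5 * rng.standard_normal((n_vox, n_t))
    true_overlap = ts1 + ts2 + 0.3 * rng.standard_normal(n_t)  # demo only

    half = n_vox // 2
    blocks = (np.arange(n_t) // 30) % 2   # 30-timepoint switching windows
    simulated = {
        # Mixing: re-average random single-network vertices in place of
        # the red (overlap) vertices
        "mixing": np.vstack([vox1[:half], vox2[:half]]).mean(axis=0),
        # Coupling: the region participates in both networks at once
        "coupling_linear": ts1 + ts2,
        "coupling_nonlinear": ts1 * ts2,
        # Switching: the region alternates between the networks over time
        "switching": np.where(blocks == 0, ts1, ts2),
    }

    # Which hypothesis-based series best matches the true overlap series?
    scores = {name: np.corrcoef(true_overlap, sim)[0, 1]
              for name, sim in simulated.items()}
    print(max(scores, key=scores.get))  # coupling_linear here, by construction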
  • 17:49And so the first thing I can now
  • 17:52ask is which of these time courses
  • 17:54is most similar to the actual,
  • 17:57original, true overlap time series.
  • 17:59And so there are some interesting patterns
  • 18:02here, and we could talk about that a lot more,
  • 18:04but the winner, visually and statistically,
  • 18:07is the linear additive coupling.
  • 18:10So that is the coupling hypothesis
  • 18:12that just sums up the other two regions.
  • 18:17Without having time to go into a
  • 18:19lot of different details:
  • 18:21we also did hidden Markov modeling to get
  • 18:24dynamic states from the three time series,
  • 18:27so the network one time series,
  • 18:29the network two time series,
  • 18:30and the overlap time series.
  • 18:32You can do that using the original
  • 18:34overlap time series and using
  • 18:36all of the simulated versions.
  • 18:38It's a little bit noisier, and that
  • 18:39makes sense because it's a more
  • 18:41complicated model that we're fitting,
  • 18:42but it's nice to see that the same
  • 18:45kind of hypothesis is winning here,
  • 18:48which is the linear additive coupling one.
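A sketch of the hidden Markov modeling step, using the hmmlearn package as an assumed implementation (the talk does not name one), with stand-in time series and an arbitrary number of states.

    import numpy as np
    from hmmlearn.hmm import GaussianHMM  # assumed library choice

    rng = np.random.default_rng(4)
    n_t = 1200
    ts1 = rng.standard_normal(n_t)
    ts2 = rng.standard_normal(n_t)
    overlap = ts1 + ts2 + 0.3 * rng.standard_normal(n_t)  # stand-in series

    # Stack the three time series as features and fit an HMM; the number
    # of states is an arbitrary illustration
    X = np.column_stack([ts1, ts2, overlap])
    hmm = GaussianHMM(n_components=4, covariance_type="full",
                      n_iter=100, random_state=0)
    hmm.fit(X)
    states = hmm.predict(X)  # dynamic state sequence over time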
  • 18:51And so with that,
  • 18:52my first question is can we reliably
  • 18:56estimate these weighted networks?
  • 18:58And the answer is yes,
  • 19:00given sufficient high quality data.
  • 19:03Is spatial overlap present? Yes,
  • 19:05we see significant and pretty
  • 19:07reliable and reproducible spatial
  • 19:09overlap at the individual level.
  • 19:11And then the third question was,
  • 19:12well,
  • 19:13what mechanisms might give rise to that?
  • 19:15At MRI resolution, given the
  • 19:18constraints of what we've done here,
  • 19:21it appears to be the linear
  • 19:23additive coupling hypothesis.
  • 19:24I have a lot more questions
  • 19:26about that, because a voxel is a
  • 19:27big space: we have a lot of
  • 19:29neurons in a voxel, and so there might
  • 19:31well be interdigitation at a level
  • 19:33smaller than a voxel.
  • 19:35A TR,
  • 19:36even with multiband, is also still pretty
  • 19:38long compared with neurons firing.
  • 19:40And so there might well be dynamic
  • 19:43switching at a temporal
  • 19:45resolution beyond MRI.
  • 19:48And so I have some funding to do
  • 19:50some animal work to try and get
  • 19:52closer to that, and kind of get beyond
  • 19:54the level of MRI to ask those questions.
  • 19:59So with that,
  • 20:01I will thank my collaborators on this
  • 20:04particular project at Oxford and WashU,
  • 20:07my lab,
  • 20:08my funding, and the datasets
  • 20:10that we all work with heavily.
  • 20:13And thank you very much for your time.
  • 20:24Really cool stuff. I want to refer
  • 20:28back to your thoughtful hypotheses about
  • 20:30what could be happening.
  • 20:31I thought of two more potential ones.
  • 20:36Could it be that your dimensionality is
  • 20:40not high enough, and there are certain
  • 20:44additional networks in there that are,
  • 20:46I would say, in between?
  • 20:48I think you could either hypothesize
  • 20:51a network kind of in between in
  • 20:54network space, like, you know,
  • 20:56you have like a default thing and there's
  • 20:58something in between that, right?
  • 21:00It's like halfway one and
  • 21:01halfway to the other, right?
  • 21:03And then when you have a lower
  • 21:04dimensionality, they kind of overlap, right?
  • 21:08And then the other one I was thinking
  • 21:10of is if you had a network that is not
  • 21:13captured at your dimensionality;
  • 21:15it's kind of neither one,
  • 21:17but is spatially in between, right.
  • 21:21And I'm especially thinking of
  • 21:24this temporal parietal occipital area,
  • 21:26where there's stuff going on in
  • 21:28there, and I expect we haven't found
  • 21:30what the actual
  • 21:31representation is in there. And if there's
  • 21:36real signal in there,
  • 21:37but it's not very well captured by any
  • 21:39of the sort of well known networks,
  • 21:42could it just be kind of assigned to
  • 21:45whatever is in the 20 dimensionalities,
  • 21:48kind of halfway to one and halfway to the other?
  • 21:50OK, so the question, well,
  • 21:52there was a suggestion of
  • 21:54two alternative hypotheses
  • 21:56here. Right, right.
  • 21:58The first alternative hypothesis being
  • 22:01maybe the dimensionality is too low,
  • 22:03and the second hypothesis being
  • 22:05maybe we just don't know what to do,
  • 22:07or none of our models really
  • 22:09knows what to do, with those areas.
  • 22:12Yeah. Yeah.
  • 22:13And so we have, not in this specific work,
  • 22:17but in other work, we have pushed
  • 22:19up the dimensionality of Profumo
  • 22:21in particular, because it behaves
  • 22:23a little bit differently from ICA,
  • 22:25for example,
  • 22:25when you push up the dimensionality.
  • 22:28You do see this:
  • 22:30you continue to get the canonical
  • 22:32big-picture networks in Profumo even
  • 22:34if you push up the dimensionality,
  • 22:36because they don't have to
  • 22:38splinter down smaller, because
  • 22:39of the constraints and priors.
  • 22:41You do start to see additional networks,
  • 22:43but I don't think those networks
  • 22:46map on particularly well; it's like
  • 22:48it keeps the whole network and
  • 22:50then it gets sub-networks within that.
  • 22:52But I don't think that maps on
  • 22:55particularly closely on to this.
  • 22:57But I can look more closely into that.
  • 22:59I think you are right that
  • 23:02there just seems to be a lot
  • 23:05of complexity in that temporoparietal-
  • 23:07occipital kind of area, and depending
  • 23:09on what model you throw at it,
  • 23:11it's going to show up differently.
  • 23:13And I think we've seen that
  • 23:14in some of your work.
  • 23:16And I think that that's
  • 23:18why I was talking about this:
  • 23:20I feel like we need to do better
  • 23:22in connecting our results, and like
  • 23:23seeing, you know, what results with
  • 23:25different methods map on to each
  • 23:26other, because I think that there
  • 23:28might be some insights in there
  • 23:30if we do more of that.
  • 23:31But yeah, I agree that
  • 23:33there's something really interesting
  • 23:34going on in this area, and
  • 23:37I'm not saying that this is
  • 23:38the final answer to it, I think.
  • 23:41Simon.
  • 23:43The interesting thing about that
  • 23:45area is that it's also one of the
  • 23:47most variable areas, starting already
  • 23:49from macro anatomy. So the question
  • 23:51is how much, in terms of cross-subject
  • 23:55registration, mismatch may play into this.
  • 23:58It's always that area
  • 24:00where, whether it's the macro anatomy
  • 24:02or the function,
  • 24:04maybe it's just too variable to
  • 24:08really make good statements.
  • 24:10Yeah, I think, so the
  • 24:13question was that that
  • 24:14area is also hard to register,
  • 24:16and so is that
  • 24:18fundamentally driving this.
  • 24:20I think that actually gets back to
  • 24:22your point about is it a confound
  • 24:24or is it something meaningful
  • 24:26because one of the things we did
  • 24:27in one of the earlier papers,
  • 24:29where I had the video, is
  • 24:31we drove the CCA with
  • 24:34behavior from the warp fields as well.
  • 24:37So the stuff that we usually throw away,
  • 24:39and that predicts behavior really well.
  • 24:44And so you're right,
  • 24:45like we're probably not doing
  • 24:47as good a job registering,
  • 24:48but maybe once we do,
  • 24:50we throw away the individual
  • 24:51differences that we care about.
  • 24:53And so I don't know the answer to that,
  • 24:54but I just wanted to put that out there.
  • 24:58And so I'll leave it at that.
  • 25:06MSM?
  • 25:09Oh yeah, the data are MSMAll registered,
  • 25:12so we did the best we could.
  • 25:16I'd also like to ask about the overlap in
  • 25:21individuals, at the individual level versus the group level.
  • 25:25So I think my question is that you
  • 25:27specifically picked the ones that have
  • 25:307T and, like, all 12 sessions. Did you
  • 25:32just want more
  • 25:35sessions, or did you actually compare that?
  • 25:37Do they have the same kind
  • 25:40of overlapping networks?
  • 25:44I'm guessing the
  • 25:49individual overlaps are also consistent
  • 25:51across different sessions.
  • 25:56Yeah, so the question was,
  • 25:57did we do anything beyond just using
  • 26:00the extra data with the 7T data?
  • 26:03We didn't; in this,
  • 26:04we just used it as extra data, but I
  • 26:07think those are interesting ideas to look at.
  • 26:10There didn't seem to be any
  • 26:12systematic differences between the
  • 26:147T data and the other data.
  • 26:17Let's go, Ruby.
  • 26:31Yes. So the question is the overlap
  • 26:34kind of relies on the thresholding and
  • 26:36there's different ways you can do it.
  • 26:38So, if you just look at
  • 26:40the spatial correlation matrix,
  • 26:42that's just, like, straight-up correlating
  • 26:44the weighted networks
  • 26:46within subjects.
  • 26:47So that doesn't require any thresholding.
  • 26:52And so we did that for some of the stuff,
  • 26:55yes. If you want to, for the
  • 26:57simulated versions and stuff like that,
  • 26:59you do need to put a threshold.
  • 27:01We put a fairly stringent threshold, so that
  • 27:05areas had to have... yeah, so.
  • 27:09I don't think it changes much with
  • 27:11thresholding, particularly because
  • 27:13the nice thing about Profumo is that it has
  • 27:15kind of like a bimodal distribution over
  • 27:18whether it's sampling signal or noise.
  • 27:21So,
  • 27:22even though it's weighted,
  • 27:23it doesn't have a lot of kind of
  • 27:25weights in the middle, because
  • 27:28of the model.
  • 27:31[Inaudible question about anatomical and structural connectivity.]
  • 27:50Will you repeat that?
  • 27:51Yes, the question was have we looked
  • 27:54at the structural connectome and
  • 27:56how this relates to that. We are currently
  • 27:59doing a project in the lab trying to
  • 28:01link Profumo networks and individual
  • 28:03differences to structural networks.
  • 28:05I don't have any big answers from that.
  • 28:08But yeah, I think it is
  • 28:10an important area for sure.
  • 28:11We are also planning to do some of this
  • 28:14in monkeys afterwards which I think will
  • 28:17be just monkey MRI data to start with,
  • 28:19which I think would be interesting
  • 28:20to get at some of the anatomical
  • 28:23and related questions.
  • 28:26One last question, and then
  • 28:28I'll make a comment. Go ahead.
  • 28:33[Inaudible question about what the 20 networks are
  • 28:51and how they map onto known networks.]
  • 29:02Yeah. And so the question is what
  • 29:06are these 20 networks really,
  • 29:08for which we did some testing at
  • 29:11different dimensionalities, I think
  • 29:13between, like, 10 and 30,
  • 29:1540, to see. And actually with Profumo
  • 29:19you get pretty similar networks if
  • 29:21you go higher, because it doesn't,
  • 29:23it doesn't have to splinter down
  • 29:24smaller because it doesn't have
  • 29:26that independence constraint.
  • 29:27So the networks,
  • 29:28these 12 networks are things that
  • 29:29you see that are relatively, what
  • 29:31I would say, canonical, that you
  • 29:33see with relative reliability.
  • 29:35And they do to some degree map on, so.
  • 29:38So the mapping,
  • 29:39the naming that we did was
  • 29:41based on the [inaudible] paper.
  • 29:44We didn't go at the, I told you
  • 29:46this was a sticking-my-head-in-the-
  • 29:48sand talk, and I wasn't looking
  • 29:49to talk about behavior; with 20
  • 29:51subjects we can't talk about behavior, right?
  • 29:54But it is the case that, from
  • 29:57that video at the start, I showed you
  • 29:59that, like, the default mode and
  • 30:02the cognitive networks are the ones
  • 30:03that are the most important for
  • 30:05behavior and that also contribute
  • 30:07a lot to that overlap.
  • 30:14Great. Thank you very much everyone.
  • 30:16A comment, which you
  • 30:19already mentioned,
  • 30:20but I want to emphasize, which is this
  • 30:22potential for complexity at the voxel level.
  • 30:25And apparently in animal
  • 30:26studies you'll look at that.
  • 30:27It seems to me that all three of these
  • 30:30hypotheses could be true, that,
  • 30:31you know, within a voxel, because
  • 30:34there's hundreds of thousands of
  • 30:35neurons in there and they presumably
  • 30:37don't all do the same thing, so.
  • 30:39Yeah, that's great,
  • 30:41and I'm glad you're going to look at that.
  • 30:45Yeah, it's really hard.
  • 30:48Uh, it's a good question, but uh,
  • 30:50yeah, tough to answer.
  • 30:52Thank you very much, Janine.
  • 30:53Thanks, everybody.