
Bratislav Misic “Towards a biologically-annotated connectome”

March 10, 2023

Transcript

  • 00:05We can just jump right into it.
  • 00:07So I think that over the past 15-20 years,
  • 00:11there's been a real focus on
  • 00:13connection patterns in the brain.
  • 00:15We saw this in this meeting throughout,
  • 00:16where people were talking about either
  • 00:18the topology of the connections,
  • 00:20how they can be used to predict
  • 00:22individual differences in behavior,
  • 00:24fingerprint individuals, and so on.
  • 00:26And even though we don't really think this,
  • 00:28if we were asked, we implicitly
  • 00:30assume that the brain looks this way,
  • 00:32that all nodes are the same,
  • 00:34and we abstract away
  • 00:35all of this microarchitectural information,
  • 00:38because that's part of the
  • 00:40methods that we're using,
  • 00:41but they're really not all the same.
  • 00:43Brain areas are different in
  • 00:45terms of their gene expression,
  • 00:46neurotransmitter receptors,
  • 00:47so on and so forth.
  • 00:49And it's not as though we don't
  • 00:50have access to this data.
  • 00:52We can measure many of these things
  • 00:54in our garden variety MRI protocols.
  • 00:56We can do it with more advanced
  • 00:58imaging methods.
  • 00:59We can also get it from emerging
  • 01:01technologies that are made available
  • 01:03by others elsewhere in the world.
  • 01:05And so,
  • 01:08this is part of a kind of
  • 01:09a bigger problem or challenge
  • 01:10in the field in my opinion,
  • 01:12where we are continuously generating
  • 01:14reference maps about where different
  • 01:16biological features in the brain reside,
  • 01:18how they differ.
  • 01:19But these maps are often coming in
  • 01:21difficult to compare coordinate systems.
  • 01:23They're being shared ad hoc.
  • 01:25And what I mean by that is you're
  • 01:26emailing somebody that you know in
  • 01:27another lab who's maybe used this in
  • 01:29their paper and you want to get it,
  • 01:30but you have actually no idea if
  • 01:32anything's been done to this map
  • 01:34when it comes back to you.
  • 01:36And the question is, well,
  • 01:37if you have a new map that you've made
  • 01:40so you've contrasted patients and
  • 01:41controls or you've done some kind of task,
  • 01:44you want to know whether that map is
  • 01:46enriched for a particular biological feature.
  • 01:48If you're a physician,
  • 01:49you might want to know whether a
  • 01:51particular cell type or neurotransmitter
  • 01:53system is affected in a disease.
  • 01:54So you can maybe design some kind
  • 01:57of therapy or you just want to
  • 01:59have an idea of where to go next
  • 02:01with how to design your next
  • 02:03set of experiments going forward.
  • 02:05And um,
  • 02:05we don't really have any kind of
  • 02:07way of doing that in neuroimaging.
  • 02:09Whereas in other adjacent fields,
  • 02:10and I think genomics is always a
  • 02:12good point of comparison for us,
  • 02:14they actually do this very routinely.
  • 02:15They have living ontologies of
  • 02:17reference maps where when you
  • 02:19generate a new data set,
  • 02:21you always compare it to that and then
  • 02:23that gets added into the corpus, and
  • 02:25these things continue to proliferate.
  • 02:27So g:Profiler is one famous example of this,
  • 02:29but there are numerous others
  • 02:31like SAFE and so on.
  • 02:32So I'm inspired by that.
  • 02:34So we had a
  • 02:36bunch of internal tools in
  • 02:38the lab that we were using
  • 02:39all the time, and we decided
  • 02:41to put them together in this
  • 02:42toolbox that we call neuromaps,
  • 02:43kind of like Google Maps for the brain.
  • 02:45My apologies to Mac because
  • 02:47the globe is wrong.
  • 02:49We'll do a different one
  • 02:51for the Australian market.
  • 02:53And the toolbox,
  • 02:55it's a Python toolbox, and
  • 02:56I'll kind of take you through it.
  • 02:58It's very modular in the sense that
  • 03:00you don't have to run the whole
  • 03:02pipeline that I'm describing,
  • 03:03but you can use the individual components.
  • 03:06So basically the idea is you could
  • 03:07come in with your favorite brain
  • 03:09map in whatever space. We provide
  • 03:10a set of transformations,
  • 03:11so you can go to any other coordinate
  • 03:13space in neuroimaging that you care about.
  • 03:15We give you a curated library
  • 03:18of different reference maps
  • 03:21of biological features.
  • 03:23We provide a suite of statistically
  • 03:26rigorous tests so you can compare
  • 03:28pairs of brain maps to one another.
  • 03:30And then if you want, you can get an
  • 03:32enrichment score to say,
  • 03:34well, this map seems to be enriched for these
  • 03:36particular layers or something like that.
  • 03:38So I'll take you through each of
  • 03:40these kind of piece by piece.
  • 03:42So this is the library.
  • 03:44It contains a number of maps,
  • 03:47some of which are from MRI,
  • 03:49some from PET, to do with microstructure
  • 03:51for instance,
  • 03:52and as well, my favorite
  • 03:55is synapse density from UCB-J;
  • 03:57various maps of metabolism,
  • 03:59again both from MRI and PET.
  • 04:01We have maps to do with cortical expansion,
  • 04:04so across phylogeny and ontogeny,
  • 04:06and different dynamics maps.
  • 04:08These are source-resolved resting-
  • 04:10state MEG maps in different canonical
  • 04:14electrophysiological frequency bands.
  • 04:16You also have the intrinsic
  • 04:18timescale map there as well,
  • 04:20from resting-state BOLD. We have
  • 04:21the thicknesses of different cortical
  • 04:23layers from the BigBrain histological
  • 04:25atlas, and we can also provide you
  • 04:28with any derivative of
  • 04:30gene expression that you like.
  • 04:33So for instance you can do cell
  • 04:34type deconvolution from the Allen Human
  • 04:36Brain Atlas and you get maps of
  • 04:37different cell types as well.
  • 04:41I just want to say, each of these
  • 04:43maps is in its native space,
  • 04:44which is really cool, because
  • 04:46then you can make sure that you're
  • 04:48always comparing apples to apples.
  • 04:50The one dataset that we had a
  • 04:52slightly bigger hand
  • 04:54in producing is this atlas
  • 04:56of neurotransmitter receptors.
  • 04:57So this was actually a project
  • 04:59led by Justine Hansen,
  • 05:00who's standing right over there.
  • 05:01And she went and contacted a bunch
  • 05:03of PET centers around the world and
  • 05:04asked them to share their data on
  • 05:06different neurotransmitter receptors.
  • 05:08So at the end of the day,
  • 05:08she managed to amass this nice data
  • 05:10set of about 19 different receptors
  • 05:12and transporters across nine
  • 05:14different neurotransmitter systems.
  • 05:15And I'd be remiss not to mention that
  • 05:17since we're being hosted here by Yale,
  • 05:19in a way our biggest contributor was
  • 05:20Rich Carson at the Yale PET Center.
  • 05:22And we also have a contribution pipeline.
  • 05:25So this current library
  • 05:27is designed to our taste.
  • 05:28But if you have a map that you think
  • 05:30should be added, we can do that.
  • 05:31Oh, and at the bottom I'm
  • 05:33showing you how this would run,
  • 05:34how simple the syntax is in the toolbox.
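To give a concrete sense of that syntax, here is a minimal sketch of browsing and fetching the library, assuming the neuromaps API as documented (datasets.available_annotations and datasets.fetch_annotation; the 'abagen' source tag is just one example):

```python
from neuromaps import datasets

# list the curated library: (source, desc, space, density) tuples
for annotation in datasets.available_annotations():
    print(annotation)

# fetch one annotation by its source tag ('abagen' as an example)
annotation = datasets.fetch_annotation(source='abagen')
```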
  • 05:37OK. So those are the maps.
  • 05:39Now if you want to be able to
  • 05:41compare maps to one another,
  • 05:42we've provided you with a
  • 05:43number of transformations.
  • 05:44We did not invent these,
  • 05:45but basically we provide support
  • 05:48for MNI152, CIVET (because we're
  • 05:50at the MNI), fsaverage, and fsLR.
  • 05:52So if you want to go volume to surface,
  • 05:55we have registration fusion.
  • 05:56Thank you, Thomas.
  • 05:57If you want to go among
  • 06:00the different surfaces,
  • 06:01we've implemented MSM, and we do
  • 06:03a little bit of benchmarking in
  • 06:05the paper, you can see, but
  • 06:07these really are as good as advertised.
  • 06:10So they're there for you, and
  • 06:12we can parcellate as well.
  • 06:13So you can provide any atlas that
  • 06:14you like and it will spit out
  • 06:16parcellated data for you as well.
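A hedged sketch of those transformations, assuming the function names as they appear in the toolbox documentation (mymap.nii.gz is a hypothetical volumetric map of yours):

```python
from neuromaps import transforms

# volume (MNI152) to surface (fsaverage), via registration fusion
fsavg = transforms.mni152_to_fsaverage('mymap.nii.gz', '10k')

# surface to surface (fsaverage to fsLR), via multimodal surface matching
fslr = transforms.fsaverage_to_fslr(fsavg, '32k')
```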
  • 06:18Last but not least,
  • 06:19so I,
  • 06:20you know I started out with biology
  • 06:21and all this,
  • 06:22but it's really just a Trojan horse,
  • 06:23so I can talk about spatial nulls
  • 06:25and spin tests.
  • 06:28Basically, anytime you want
  • 06:30to contextualize brain maps,
  • 06:31it's going to come down to you comparing,
  • 06:33like doing a spatial
  • 06:35correlation between 2 maps.
  • 06:37And I'll give you an example of
  • 06:38how we actually got into this.
  • 06:39So our group actually mostly studied
  • 06:42structure function relationships
  • 06:43and we had this method a few years
  • 06:45ago that we developed where you
  • 06:46can compare the structural and the
  • 06:48functional connectivity profile of a
  • 06:51node to estimate structure function
  • 06:53coupling in different brain regions.
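The published method is more involved, but the core idea can be sketched in a few lines (a hypothetical illustration, not the exact pipeline; sc and fc are region-by-region structural and functional connectivity matrices):

```python
import numpy as np

def sf_coupling(sc, fc):
    """Correlate each node's structural and functional connectivity profiles."""
    n = sc.shape[0]
    coupling = np.zeros(n)
    for i in range(n):
        mask = np.arange(n) != i  # drop the self-connection
        coupling[i] = np.corrcoef(sc[i, mask], fc[i, mask])[0, 1]
    return coupling  # one structure-function coupling value per region
```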
  • 06:55And what that map is showing is it's
  • 06:58actually emphasizing areas that have
  • 06:59very low structure-function coupling.
  • 07:01So they kind of look like
  • 07:03transmodal or association cortex.
  • 07:05And we got very excited because it
  • 07:08obviously looks a lot like the principal
  • 07:10functional gradient from Daniel Margulies.
  • 07:12And when you correlate them,
  • 07:13you see this nice negative correlation.
  • 07:14So we got excited, submitted the paper,
  • 07:17and the reviews come in: three reviewers,
  • 07:19and they say, oh, you know,
  • 07:20you should fix this and that.
  • 07:21And it was very, very doable,
  • 07:23very constructive.
  • 07:23But then two days later I get an
  • 07:25e-mail from the editor saying,
  • 07:26well, actually there was a fourth
  • 07:28reviewer on this paper.
  • 07:29You know, they're running a bit late,
  • 07:31but they'd really like
  • 07:32you to incorporate
  • 07:33their suggestions, and that reviewer
  • 07:35was much less positive.
  • 07:36The reviewer said cute story,
  • 07:39but everything that you're doing is wrong.
  • 07:41And what he or she meant was that
  • 07:43we had reported these hilariously
  • 07:45overinflated P values alongside
  • 07:49our correlation coefficients.
  • 07:52And we just thought we were really good at
  • 07:55science and our result was that strong.
  • 07:57But the point was that you see these,
  • 07:59these points in this scatterplot,
  • 08:01these are all brain regions.
  • 08:02They're coming from a system
  • 08:04that's spatially embedded,
  • 08:04spatially contiguous.
  • 08:05So there's a certain amount of
  • 08:07just naturally occurring spatial
  • 08:09autocorrelation and also some spatial
  • 08:11autocorrelation that's imposed by
  • 08:13the way that data is processed.
  • 08:14And because the points are
  • 08:17not independent,
  • 08:17that means they're violating a
  • 08:20basic assumption of the parametric
  • 08:22test,
  • 08:23but also, if you were to do a
  • 08:24permutation test,
  • 08:25the points are no longer
  • 08:27exchangeable.
  • 08:27So this is just a general problem
  • 08:30where if you're comparing
  • 08:32spatially autocorrelated maps,
  • 08:34you're going to get inflated values.
  • 08:35And just to show you
  • 08:37exactly how this would work here:
  • 08:39we generated a pair of completely
  • 08:42random brain maps, on the left there,
  • 08:44and then we progressively smooth them.
  • 08:47As you start to smooth them,
  • 08:49you go from a correlation of 0
  • 08:50to a very large correlation.
  • 08:53So the idea here is that when you
  • 08:54have greater spatial autocorrelation,
  • 08:56you're going to have fewer true
  • 08:58degrees of freedom and you're going
  • 08:59to get spurious correlations.
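A toy version of that demonstration, using nothing beyond numpy and scipy: the two random fields are independent, but as smoothing increases the effective degrees of freedom shrink, so the sample correlation can become very large in magnitude:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 64))  # two unrelated random "maps"
y = rng.standard_normal((64, 64))

print('unsmoothed:', np.corrcoef(x.ravel(), y.ravel())[0, 1])  # near 0
for sigma in (1, 4, 8):  # progressively heavier smoothing
    xs = gaussian_filter(x, sigma).ravel()
    ys = gaussian_filter(y, sigma).ravel()
    print('sigma', sigma, ':', np.corrcoef(xs, ys)[0, 1])  # magnitude balloons
```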
  • 09:01So how do you deal with this?
  • 09:02Around that time there were two
  • 09:04kind of broad families of methods
  • 09:06that have been proposed.
  • 09:07One, the famous spin test
  • 09:09from Aaron Alexander-Bloch,
  • 09:10where you take your map of annotations
  • 09:13and project it to a sphere.
  • 09:15You apply an angular rotation
  • 09:16and bring it back to the surface.
  • 09:18So now you've created a new map where
  • 09:20the annotations have been permuted,
  • 09:23but the spatial autocorrelation
  • 09:25has been preserved.
  • 09:26And the other family of methods,
  • 09:27what I'll call in
  • 09:28the next couple of slides
  • 09:29parameterized nulls or generative nulls,
  • 09:32have to do with, for instance, this
  • 09:33work from Josh Burt and
  • 09:34John Murray,
  • 09:35where you actually estimate the variogram
  • 09:38of your map. You then
  • 09:41generate a completely
  • 09:43permuted map and try,
  • 09:45through a series of
  • 09:47blurring and scaling steps,
  • 09:48to recapitulate the variogram,
  • 09:49so you have a map that has the
  • 09:52same spatial autocorrelation.
  • 09:53Big shout out to Michael Breakspear
  • 09:55there, who actually was way ahead
  • 09:57of the curve and had a paper on
  • 10:00this about 15 years ago or more:
  • 10:02the famous wavestrapping
  • 10:04method, which actually
  • 10:05sits in between these two extremes.
  • 10:08And if you're interested
  • 10:11in this kind of work,
  • 10:13we recently wrote a review on
  • 10:14null models in network
  • 10:16neuroscience more generally.
  • 10:17What we found, when
  • 10:19we had to address these reviews, was that
  • 10:22there were at least 10 different
  • 10:23methods that have been proposed in
  • 10:25the literature and we were interested
  • 10:26in how they compared to one another.
  • 10:28So this is just to show you the
  • 10:31spin test method from Aaron.
  • 10:34It's very clear what to do when you're
  • 10:36working with a dense surface,
  • 10:39but when you have a parcellated data
  • 10:41set you have to make hard decisions
  • 10:42about what happens when the medial
  • 10:44wall rotates around into cortex
  • 10:45and hits a parcel.
  • 10:50Long story short, within about a
  • 10:51year after that paper came out,
  • 10:52there were a number of different
  • 10:54implementations that all dealt
  • 10:55with this problem differently,
  • 10:57and also there were a number of
  • 10:59different parameterized models as well.
  • 11:00So we wanted to know how well do
  • 11:02they compare and how well can
  • 11:04they control family-wise error.
  • 11:06And the way that we did this is we
  • 11:08would generate a pair of brain maps,
  • 11:10we would correlate them and then
  • 11:12we would take one of the maps,
  • 11:13for instance in this example the Y map,
  • 11:15we would apply one of these nulls to it,
  • 11:17so spin it around for instance,
  • 11:19recompute the correlation coefficients,
  • 11:22get a distribution of these null correlations,
  • 11:25and then estimate a two-tailed P value.
  • 11:28And, ignore what's written there,
  • 11:31the greater-than-or-equal:
  • 11:33we're looking for more extreme
  • 11:35values than our empirical value.
  • 11:38And you can normalize that by 1,000
  • 11:40to get a P value, and then obviously
  • 11:41the number of times that you get a
  • 11:43P value that's less than 0.05,
  • 11:45that's your family-wise error.
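That counting procedure is the usual two-tailed permutation p-value; in numpy it is one line (the +1 in numerator and denominator, which keeps p from being exactly zero, is a common convention and an assumption beyond what is said here):

```python
import numpy as np

def perm_pval(r_emp, r_null):
    """How often the null correlations are at least as extreme as the empirical one."""
    r_null = np.asarray(r_null)
    return (1 + np.sum(np.abs(r_null) >= np.abs(r_emp))) / (1 + len(r_null))
```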
  • 11:47So the way that we did this,
  • 11:49we tuned the spatial autocorrelation
  • 11:51and the
  • 11:53parcellation resolution.
  • 11:54So we had these simulations where
  • 11:57you place Gaussian random fields on
  • 11:59a lattice and you project
  • 12:02them to the fsaverage surface.
  • 12:04So here we're tuning the amount
  • 12:06of spatial autocorrelation in
  • 12:07a map; you can see an example,
  • 12:09a couple of brain maps, on the right.
  • 12:11This is just a sanity check showing
  • 12:14a probability distribution
  • 12:17of the correlation coefficients
  • 12:18that you get when you tune up the
  • 12:21spatial autocorrelation.
  • 12:22You can see, as the spatial
  • 12:23autocorrelation goes up,
  • 12:24you start to get these
  • 12:25very wide distributions,
  • 12:26very large magnitudes of correlations.
  • 12:29And so here we come to the test.
  • 12:32So what you're seeing here on the
  • 12:34Y axis is the false positive rate
  • 12:36and on the X axis you're seeing the
  • 12:39spatial autocorrelation. What you
  • 12:40can see is that the spatially naive
  • 12:43nulls, the ones in this kind of
  • 12:45purplish color,
  • 12:46immediately shoot off into space,
  • 12:49never to be heard from again.
  • 12:50They have extraordinarily high
  • 12:53false positive rates.
  • 12:55The spatially aware nulls tend to keep
  • 12:58things under control up until some
  • 13:01fairly large amount of spatial autocorrelation,
  • 13:04but they are also subject to
  • 13:06false positives. What you're
  • 13:07also seeing is that generally,
  • 13:09and this is true both for dense
  • 13:11surfaces and for any parcellation
  • 13:13atlas that you can come up
  • 13:16with, the spin-based
  • 13:18nulls tend to be a little bit more
  • 13:20conservative than the parameterized nulls.
  • 13:22What we concluded here was that
  • 13:23these spatially naive nulls are completely
  • 13:25inappropriate for significance testing.
  • 13:27You should not be using them,
  • 13:29but the choice of spatial null will
  • 13:30obviously depend on the context.
  • 13:31So the spin tests are
  • 13:36a little bit more statistically accurate,
  • 13:38but you can only use them
  • 13:39when you have surfaces.
  • 13:40So if your question involves
  • 13:43the volume, subcortex, or cerebellum, or whatever,
  • 13:46you're going to have to use
  • 13:47the parameterized nulls,
  • 13:48but they'll come at the cost of
  • 13:49some increased Type I errors.
  • 13:51The good news, though, is
  • 13:52that these nulls are generally
  • 13:54parcellation- and resolution-invariant,
  • 13:56so you don't have to worry about that.
  • 13:57Anyway.
  • 13:58Long story short,
  • 13:59we implemented all of these in
  • 14:02neuromaps, and we have
  • 14:04guidelines on when you should use which one,
  • 14:07so they are there for you as well.
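An end-to-end sketch of what that looks like in neuromaps, assuming the documented API (alexander_bloch and burt2020 in neuromaps.nulls, compare_images in neuromaps.stats; map1 and map2 are hypothetical maps already in fsaverage 10k space):

```python
from neuromaps import nulls, stats

# spin-based null (surfaces only)
rotated = nulls.alexander_bloch(map1, atlas='fsaverage', density='10k',
                                n_perm=1000, seed=1234)

# variogram-matched generative null (also usable off the surface)
surrogates = nulls.burt2020(map1, atlas='fsaverage', density='10k',
                            n_perm=1000, seed=1234)

# correlation between the two maps, with a null-based p-value
r, p = stats.compare_images(map1, map2, nulls=rotated)
```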
  • 14:09So these are the kind of components of
  • 14:11neuromaps, and just to give you
  • 14:14an example of how this might be used,
  • 14:16what we did was to take data
  • 14:20from the ENIGMA consortium.
  • 14:21So these are cortical thinning
  • 14:24maps for 13 different diseases.
  • 14:26And what Justine did here was
  • 14:33to see if combinations of different
  • 14:36receptor maps can be used to predict
  • 14:38the cortical thinning pattern.
  • 14:39So which cortical thinning patterns are
  • 14:42associated with which receptor types?
  • 14:44And what you're seeing here
  • 14:45is basically that
  • 14:46we recapitulate,
  • 14:46for instance,
  • 14:47some well-known hits, such as the
  • 14:50serotonin transporter in a
  • 14:51number of these psychiatric diseases.
  • 14:53But then you also get novel hits that
  • 14:56could potentially be explored further
  • 14:58in follow-up work, hits that oftentimes
  • 15:01are not necessarily textbook but
  • 15:03do have a literature behind them.
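The published analysis is more careful (cross-validated, with dominance-style attribution of variance), but the basic model is a multiple regression of a disease map on the receptor maps; a sketch under those assumptions (receptors is a hypothetical regions-by-receptors array, thinning a regional cortical thinning map):

```python
from sklearn.linear_model import LinearRegression

model = LinearRegression().fit(receptors, thinning)
r2 = model.score(receptors, thinning)  # variance in thinning explained
weights = model.coef_  # which receptor maps carry the prediction
```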
  • 15:06Anyway, it's very easy to install.
  • 15:08Please use it and let us know
  • 15:10if we did anything wrong.
  • 15:11And we have a team of about
  • 15:13three or four people who are
  • 15:15looking at issues as they come up.
  • 15:16OK,
  • 15:17so this is
  • 15:19the tool, and this is how you can get the data.
  • 15:22And I'm just going to do a quick
  • 15:24lightning round over the next 5 minutes
  • 15:25of what you can actually do with this.
  • 15:27So you've got your map of annotations,
  • 15:29you've superimposed it on your connectome,
  • 15:31and you've put the annotations in each node,
  • 15:34and now you have an annotated connectome.
  • 15:36What can you do with it?
  • 15:37What kind of questions does this open up?
  • 15:39This is an example from Vincent
  • 15:42Bazinet's work in the group,
  • 15:45who looked at assortativity.
  • 15:46So assortativity,
  • 15:47broadly speaking,
  • 15:48is the tendency for two nodes to
  • 15:50be connected with one another if
  • 15:52they have similar annotations.
  • 15:53Now for some reason in network neuroscience,
  • 15:56there's been this tunnel vision of
  • 15:58always focusing on only one annotation,
  • 16:00which is degree: how many
  • 16:02connections there are.
  • 16:02So are two nodes with very
  • 16:04similar numbers of connections
  • 16:05connected with one another.
  • 16:06But obviously you can ask this for
  • 16:08any type of biological annotation,
  • 16:10and that's what he did.
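Under the usual definition, annotation assortativity is just the correlation of an annotation across the two endpoints of every edge; a minimal sketch (adj is a hypothetical binary adjacency matrix, x the regional annotation vector):

```python
import numpy as np

def annotation_assortativity(adj, x):
    """Pearson correlation of annotation x across connected node pairs."""
    i, j = np.where(np.triu(adj, k=1))  # undirected edge list
    a = np.concatenate([x[i], x[j]])    # count each edge in both directions
    b = np.concatenate([x[j], x[i]])
    return np.corrcoef(a, b)[0, 1]
```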
  • 16:12He did this for structural and
  • 16:13functional data in the human,
  • 16:15and also in a variety of other
  • 16:17model organisms. What I'm
  • 16:19going to show you is one example
  • 16:21from the results. These
  • 16:23are a little bit difficult to read,
  • 16:24but what you're seeing is on the Y axis is
  • 16:26the assortativity of different annotations,
  • 16:29and on the X axis he's pruning away
  • 16:32progressively longer and longer connections.
  • 16:35So on the left hand side you always have
  • 16:37the full connectome and on the right
  • 16:39hand side you have a connectome
  • 16:41which just retains very long
  • 16:43distance connections.
  • 16:44First of all, a bit of a shock to the
  • 16:46system: at the very beginning
  • 16:48you don't always even have positive
  • 16:50assortativity, which runs counter
  • 16:52to some very famous theories.
  • 16:53But also you see that when you only
  • 16:55have long distance connections,
  • 16:57you have actually very
  • 16:59disassortative connectomes,
  • 16:59which is kind of a cool result because
  • 17:01we often think about what is the point
  • 17:02of having these very long distance
  • 17:04connections that are metabolically
  • 17:05costly and take up a lot of space.
  • 17:06And what you're seeing here is that
  • 17:08their job is to connect and to promote
  • 17:11communication between neuronal populations
  • 17:12that are distinct in terms of their
  • 17:15underlying cytoarchitecture.
  • 17:17They serve to diversify
  • 17:18the type of signaling that you
  • 17:20get in your network.
  • 17:23Also,
  • 17:23you can play this type of game with paths.
  • 17:27So shortest paths are one of the most
  • 17:30fundamental concepts in graph theory,
  • 17:32and they are actually part of many
  • 17:35other statistics that we routinely use,
  • 17:38such as the between the centrality of a node,
  • 17:40you know how many shortest paths go
  • 17:41through a particular node and so on.
  • 17:43But we always take averages of them.
  • 17:46For some reason we always say what is
  • 17:48the mean shortest path in my network,
  • 17:50you know,
  • 17:51when you're looking at the small-world
  • 17:53index or something. We hardly ever consider
  • 17:56what route a path actually takes.
  • 17:58But we should because obviously
  • 18:00as a signal travels through a
  • 18:03series of neural populations,
  • 18:04it's going to be transformed in some way.
  • 18:06So where it goes is actually very important.
  • 18:09And what we were doing here is we
  • 18:10would take functional connectivity,
  • 18:12recompute this beautiful unimodal-
  • 18:14transmodal gradient from Daniel Margulies,
  • 18:17and we would annotate all the different
  • 18:20communication pathways in the network.
  • 18:22So we would ask, along a
  • 18:24communication pathway,
  • 18:25do you go through unimodal or transmodal cortex?
  • 18:27And what you could do here is you can
  • 18:29then ask, through these path motifs, what
  • 18:31signaling looks like in the brain.
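A sketch of that annotation step, assuming scipy, a hypothetical weighted structural matrix sc (stronger connections treated as shorter edges) and a regional gradient vector grad:

```python
import numpy as np
from scipy.sparse.csgraph import dijkstra

# convert weights to lengths so stronger connections are "shorter"
lengths = np.full(sc.shape, np.inf)
lengths[sc > 0] = 1.0 / sc[sc > 0]

_, pred = dijkstra(lengths, return_predecessors=True)

def path_nodes(pred, src, trg):
    """Recover the node sequence of the shortest path src -> trg."""
    path = [trg]
    while path[-1] != src:
        path.append(pred[src, path[-1]])
    return path[::-1]

# gradient profile (the "path motif") along one source-target path
motif = grad[path_nodes(pred, 0, 5)]
```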
  • 18:33And what we showed in this paper is
  • 18:35that the majority of communication
  • 18:37pathways in the human brain strictly
  • 18:39follow either a very simple bottom-
  • 18:41up signaling trajectory
  • 18:43or a top-down signaling trajectory.
  • 18:45The very small proportion
  • 18:47of paths that ever change direction
  • 18:49change direction in the attention
  • 18:52networks,
  • 18:52which is really cool because it says
  • 18:54that there's something about the
  • 18:56anatomical connectivity of attention
  • 18:57networks that predisposes them to change
  • 18:59the nature of signaling from top down
  • 19:01to bottom up or bottom up to top down.
  • 19:03Anyway, you can take a look at
  • 19:04the paper if you are interested.
  • 19:06Justine
  • 19:08also looked at whether we can do better
  • 19:11at predicting functional connectivity
  • 19:12from structural connectivity.
  • 19:14So here at the bottom you're seeing
  • 19:15how well we can predict function from
  • 19:17structure in different brain areas.
  • 19:18This is what I showed a little bit before,
  • 19:20where the prediction is highest in unimodal
  • 19:23cortex and lowest in transmodal cortex.
  • 19:25And what she said was,
  • 19:26OK, well, what if I use structural
  • 19:28connectivity plus similarity
  • 19:30of receptor profiles,
  • 19:32how well do you do at
  • 19:33predicting functional connectivity?
  • 19:34What you're seeing here is
  • 19:36the line there,
  • 19:37that's the identity line. You
  • 19:38can see, for most brain areas,
  • 19:40you actually do a lot better when
  • 19:42you incorporate information about
  • 19:43neurotransmitter receptors, for instance.
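The flavor of that comparison can be sketched with a simple per-node regression (again hypothetical, not the published pipeline; sc, fc and recsim are region-by-region structural, functional, and receptor-similarity matrices):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_node(i, sc, fc, recsim=None):
    """R^2 for predicting node i's FC profile from its SC profile (+ receptor similarity)."""
    mask = np.arange(sc.shape[0]) != i
    X = sc[i, mask][:, None]
    if recsim is not None:
        X = np.column_stack([X, recsim[i, mask]])
    y = fc[i, mask]
    return LinearRegression().fit(X, y).score(X, y)

gain = fit_node(0, sc, fc, recsim) - fit_node(0, sc, fc)  # improvement for node 0
```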
  • 19:45So I think that the folks here
  • 19:47who study dynamical models,
  • 19:48neural masses and so on can
  • 19:51obviously see the utility of
  • 19:53having more biological detail and
  • 19:56more veridical biophysical models as well,
  • 19:59even though this is just a
  • 20:01very simple statistical model.
  • 20:02Yeah. OK.
  • 20:03And then just the last little bit,
  • 20:06we also do a lot of dynamical
  • 20:08models of how neurodegenerative
  • 20:09diseases spread in brain networks.
  • 20:12So in these models, what you're trying
  • 20:15to model is how misfolded proteins
  • 20:17spread from cell to cell and how
  • 20:20they accumulate and cause cell death.
  • 20:22So these simulations might look something
  • 20:23like this where you have a connectome,
  • 20:25you have a spreading process in
  • 20:27the connectome and you're trying
  • 20:28to predict patterns of cortical
  • 20:30thinning in a particular disease,
  • 20:31in this case Parkinson's. What
  • 20:33we find in these models time
  • 20:35and time again is that if you
  • 20:38incorporate information about
  • 20:40the underlying biological vulnerability,
  • 20:42so in this case for instance,
  • 20:44gene expression to do with the two
  • 20:46genes that we know to be very well
  • 20:49associated with Parkinson's disease.
  • 20:50We always do better.
  • 20:52So what you're seeing here is that
  • 20:54on the Y axis is model fit, how well we
  • 20:56can predict cortical thinning in PD.
  • 20:58The X axis is just slightly different
  • 21:00ways of constructing the network,
  • 21:02but in blue is a model where
  • 21:04all nodes are the same.
  • 21:05It's just a diffusion process
  • 21:07of misfolded proteins.
  • 21:08In red,
  • 21:09it's a diffusion process that's being
  • 21:11guided by regional differences in
  • 21:13the expression of these two genes
  • 21:16that have to do with Parkinson's.
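A toy version of that blue-versus-red comparison, not the lab's actual agent-based model: plain network diffusion of pathology versus diffusion modulated by a regional vulnerability vector (adj, x0 and vuln are all hypothetical):

```python
import numpy as np

def diffuse(adj, x0, steps=1000, dt=0.01, vuln=None):
    """Euler-integrated diffusion on a graph; vuln scales regional accumulation."""
    L = np.diag(adj.sum(axis=1)) - adj  # graph Laplacian
    x = x0.astype(float).copy()
    for _ in range(steps):
        dx = -L @ x
        if vuln is not None:
            dx = vuln * dx  # vulnerable regions take up pathology faster
        x += dt * dx
    return x
```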
  • 21:18And I think you're going to see
  • 21:20a very similar principle in
  • 21:22Thomas's talk after the break.
  • 21:24And then you can actually play this
  • 21:25game for lots of other diseases.
  • 21:26This is really cool.
  • 21:28It's a recent paper where we did
  • 21:29this for the specific genes that
  • 21:31have to do with the
  • 21:33mutations in
  • 21:35the different genetic variants of
  • 21:37frontotemporal dementia as well.
  • 21:38And so, you know,
  • 21:41I've been slowly kind of
  • 21:42converging to this idea that,
  • 21:43you know,
  • 21:43biological annotations are very important.
  • 21:45So that's kind of just
  • 21:46the last thing I'll show
  • 21:47you is we wanted to ask, OK, well,
  • 21:50is it the local biological annotations,
  • 21:53so, for instance,
  • 21:54information about gene expression,
  • 21:55receptors, metabolism and so on,
  • 21:58that are more important for
  • 22:00predicting the spatial patterning of
  • 22:02different diseases, compared to, say,
  • 22:04connectomic features, things
  • 22:06that have to do with the connection
  • 22:08profiles of a node?
  • 22:09And what Justine did here was to
  • 22:12kind of put them together in a
  • 22:14molecular
  • 22:16versus connectomics death match.
  • 22:18And what she's showing is that here
  • 22:20you're seeing how well we can predict
  • 22:23each of these disease patterns.
  • 22:26And what you're seeing is that generally
  • 22:28the biological features tend to
  • 22:31do better than connectomic features,
  • 22:33but actually they contribute
  • 22:35different sources of variance,
  • 22:36so you can actually even combine
  • 22:38them to do even better.
  • 22:41So I have no concluding slide,
  • 22:45except to say that
  • 22:47I think that the data,
  • 22:51the opportunity to use
  • 22:53biological annotations, is there.
  • 22:54We've made some small contributions
  • 22:56to providing the methods to do that.
  • 22:58But I think if we start thinking about this,
  • 22:59we can really increase the breadth
  • 23:01of our questions and we can
  • 23:03bring our investigation closer
  • 23:05to the underlying biology.
  • 23:06So thank you for your attention
  • 23:07and happy to take questions.
  • 23:16Great.
  • 23:28They're a group average, yeah.
  • 23:32Oh yeah, of course.
  • 23:33I mean, we just wanted
  • 23:35to have as large a swath as possible
  • 23:37when we were doing this, so
  • 23:40we opted for ENIGMA, but you know,
  • 23:42you live by ENIGMA and you
  • 23:44die by ENIGMA. So, you know, it's
  • 23:46cortical thickness only, it's
  • 23:48this very coarse parcellation,
  • 23:51you know, according to Jessica Kiliani,
  • 23:54especially for psychiatric diseases,
  • 23:55for many of them, you know,
  • 23:57it's a good question of whether you
  • 23:58should even be looking at cortical
  • 24:00thickness, whether that's the
  • 24:01phenotype you should be looking at,
  • 24:02versus something like functional
  • 24:04connectivity or something like this.
  • 24:06So yeah, point taken, but definitely
  • 24:09we can do this for much better
  • 24:12disease phenotypes, and at the individual level.
  • 24:34Maybe it's a misnomer, actually.
  • 24:36Global just refers to the fact
  • 24:38that you couldn't have gotten this
  • 24:40information without actually having
  • 24:42knowledge of the entire connectome.
  • 24:44So even just something as simple as
  • 24:46degree where we're just counting the
  • 24:47number of connections around the node,
  • 24:49we call that global, but it's
  • 24:51really just a
  • 24:54connectome-wide kind of index.
  • 25:00Not necessarily.
  • 25:00I mean, we think that,
  • 25:02but it's, you know,
  • 25:03something that you can compute
  • 25:05directly from the connection
  • 25:06patterns versus something that
  • 25:07you get from microarchitecture.
  • 25:09So that's the comparison.
  • 25:15It's really a wonderful tool.
  • 25:18Most of what you showed was [inaudible].
  • 25:21[inaudible] Or what can you offer
  • 25:23if you have actually just [inaudible],
  • 25:25like the one that you want to correct?
  • 25:28It's very straightforward.
  • 25:30You apply a parcellation
  • 25:31and you can apply it to,
  • 25:33I mean, say that you have
  • 25:36an ROI from a study.
  • 25:39Yeah,
  • 25:41because we provide,
  • 25:42I mean, I showed a lot of parcellated maps,
  • 25:44but honestly,
  • 25:45they're all dense maps actually.
  • 25:47And so you can just apply the
  • 25:49same parcellation to these
  • 25:51annotations and off you go.
  • 25:52You don't really need
  • 25:54to do anything special there.
  • 26:00Respond? Well, reviewer #4, actually,
  • 26:04I mean, I think that
  • 26:07person just opened up our eyes to
  • 26:10something that was very important, and
  • 26:12this was very inspiring, and we
  • 26:15ended up doing research on it,
  • 26:16you know. It's kind of cool.
  • 26:19I'm glad that they said that, because,
  • 26:20you know, then we didn't end up
  • 26:24reporting those funny P values.
  • 26:26Peer review works, regardless
  • 26:29of what Twitter says.