After six months of social distancing, many of us are familiar with tools like Zoom and Google Meet that allow us to gather online with our colleagues, family, and friends. Children do their schoolwork remotely while their parents host Zoom work meetings in the next room. Virtual graduations and weddings have become commonplace, allowing us to continue to take part in the rituals that give meaning to our lives.
Observing the world virtually, although new to many of us to the extent we’ve experienced it during the COVID-19 pandemic, has been a staple of medicine for over a century. Ever since Wilhelm Conrad Röntgen produced the first X-ray image in 1895—famously, a radiograph of his wife’s hand—radiology has become one of the most technologically advanced fields in medicine, with new imaging methods far surpassing Röntgen’s X-ray vision. Today’s technologies provide the ultimate in virtual viewing. With an X-ray, a doctor can see a patient’s broken bones. Mammograms allow radiologists to pinpoint the location of breast tumors. And a CT scan can show the blood vessels that feed a tumor—all without cutting into the body.
Scientists, meanwhile, are developing tools for analyzing data sets across subject areas and using machine learning to find patterns in imaging scans that radiologists might not see with the naked eye. In the realm of neuroimaging, a new technology developed by Yale University researchers, which combines calcium imaging and magnetic resonance imaging (MRI), gives scientists—and us—a first-ever virtual view of the brain at work in real time. To capture the activity of single neurons as well as widespread patterns of network activity across the brain, the researchers combined several imaging technologies that operate at three scales, which they refer to as microscopic, mesoscopic, and macroscopic.
“There haven’t before been tools that link the microscopic to the macroscopic,” said Todd Constable, PhD, professor of radiology and biomedical imaging and of neurosurgery at Yale School of Medicine.
The Yale study was led by Constable; Michael Crair, PhD, the William Ziegler III Professor of Neuroscience, professor of ophthalmology and visual science, and vice provost for research; and D.S. Fahmeed Hyder, PhD ’95, professor of radiology and biomedical imaging and of biomedical engineering. The team also included Michael Higley, MD/PhD, associate professor of neuroscience and a member of Yale’s Kavli Institute for Neuroscience; Jessica Cardin, PhD, associate professor of neuroscience; and Evelyn Lake, PhD, assistant professor of radiology and biomedical imaging. The study was funded by the National Institutes of Health BRAIN Initiative.
Learning how the brain works in real time allows researchers to better understand disorders such as autism and diseases such as Parkinson's and Alzheimer's. The questions that researchers can answer include, "How does the normal brain work?" Constable said. "And then, how does it not work properly in disease?" Constable and his team worked on the portion of the study that focused on mesoscopic imaging—using calcium optical imaging to look at the surface of the brain—and on full-brain macroscopic imaging using functional magnetic resonance imaging (fMRI).
“There are many types of neurons in the brain, and fMRI is sensitive to all of them,” Constable explained. “But calcium imaging is very discriminating, and we can label specific populations of neurons. And so, with the combination of optical imaging and MRI, we can learn what neurons are actually contributing to the MR (magnetic resonance) signals that we see. So it has a direct translation to the virtual human brain—we’re seeing the human brain at work and understanding better what’s causing the signal changes that we see, and what is driving the signal changes,” he said.
Evelyn Lake came to the Yale School of Medicine in 2016 to work with Constable and is now moving the research forward by applying it to specific disease models. "What the technology enables us to do is to learn a little bit more about what we call the BOLD signal," said Lake. The signal—BOLD stands for blood-oxygen-level-dependent—is a measure used in fMRI to observe which areas of the brain are active at a given time. "If we can learn a little bit more about that signal using preclinical models, then we can infer something about what we observe in a patient," Lake said.
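For readers curious about what the BOLD signal looks like in practice: a common simplification (not the Yale team's method, and all names and parameters below are illustrative assumptions) models the BOLD response as underlying neural activity convolved with a slow hemodynamic response function, which is why fMRI localizes activity well but lags it by several seconds. A minimal sketch:

```python
import numpy as np

def gamma_hrf(t, shape=6.0, scale=1.0):
    """Simplified single-gamma hemodynamic response, peaking at ~6 s.
    This is a toy stand-in for canonical HRF models, not the study's code."""
    h = (t ** shape) * np.exp(-t / scale)
    return h / h.max()

dt = 0.1                         # sampling interval, seconds
t = np.arange(0, 30, dt)         # 30 s of hemodynamic response
hrf = gamma_hrf(t)

# Simulated neural "events" (e.g., bursts like those seen in calcium imaging):
# 60 s of activity sampled at 10 Hz, with three brief events.
activity = np.zeros(600)
activity[[50, 200, 350]] = 1.0

# Predicted BOLD-like signal: neural activity convolved with the HRF,
# truncated to the length of the activity trace.
bold = np.convolve(activity, hrf)[: len(activity)]

# The BOLD response peaks roughly 6 s after the first neural event.
first_peak_lag = (np.argmax(bold[:150]) - 50) * dt
print(f"lag to first BOLD peak: {first_peak_lag:.1f} s")
```

The sluggishness of this hemodynamic lag is one reason combining fMRI with fast, cell-type-specific calcium imaging is so informative: the optical signal reveals which neurons drive the slower blood-oxygen changes that fMRI records.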
For example, neuroscientists could determine whether there is an imbalance emerging in Alzheimer’s disease, Lake said. “There are neurons that are responsible for increasing brain activity, and they are usually modulated by inhibitory neurons that decrease activity, as the name would suggest,” Lake said. “This is a very delicate balance in the brain.”
Lake was the first author of a scientific paper published in November 2020 that describes how her team was able to use mesoscopic calcium imaging and fMRI simultaneously after designing and building an MR-compatible optical device—the star feature of which is a 15-foot fiberoptic bundle containing over two million fibers.
Lake hopes the new technology will be applied to other research projects. “This is a unique technology. Others have tried to make it work and no one’s come close to how we managed to do this,” she said. “A lot can be accomplished using this method, which I hope will motivate other researchers to build their own version of it. The technology can be applied to many different studies in many different directions.”