The molecule meets the computer chip

Yale Medicine Magazine, Spring 2005

Harnessing the power of computers is essential as biologists work to decipher mountains of new data.

It is difficult to imagine two places more different than The Eagle Pub and Celera Genomics, each of which provided the setting for signal events in modern biology. The cozy Eagle, in Cambridge, England, was already a smoky anachronism fitted with burnished brass and dark wood in February 1953, when Francis H.C. Crick famously burst through the door to inform James D. Watson that they had jointly deciphered “the secret of life”—the structure of DNA. Nearly 50 years later, in Celera’s sterile, starkly lit “sequencing rooms,” and in similar rooms at institutions responsible for the government-sponsored Human Genome Project, the complete human genome was painstakingly unraveled by row upon row of humming computers.

The pub and the sequencing room are apt metaphors for the vast changes wrought by Watson and Crick’s discovery, which unleashed a torrent of research in molecular biology that has revolutionized our understanding of evolution, physiology and disease. Watson and Crick confronted a blank slate, but today’s scientists are awash in a fast-moving river of information so thick with possibility that the American Association for the Advancement of Science recently felt compelled to sponsor a symposium for biologists called “Inundated With Data.”

The collective efforts of the world’s scientists have allowed us to construct diagrams of intracellular signaling pathways that would make a New York subway official blanch, and the complete genomes of over 100 organisms are now in hand. However, biologists have been so busy amassing fine details that they have had little time or incentive to step back from the bench, take a breath and begin to grasp the essential patterns in the big picture.

Luckily, computing power has increased in tandem with biological knowledge at an exponential rate, setting the stage for the recent emergence of the cutting-edge, multidisciplinary field of computational biology. The field embraces genomics and proteomics (the latter aims to catalog the complete inventory of proteins encoded by genomes), but also promotes computational modeling of intracellular processes and cell-cell interactions, as well as the “high-throughput” data-mining techniques of bioinformatics, which can unveil common mechanisms underlying seemingly diverse diseases and compare genomes to discern subtle evolutionary relationships among organisms.

In 2003, Yale established an interdisciplinary Ph.D. program in computational biology and bioinformatics. That same year, the university’s Biological Sciences Advisory Committee (BSAC), under the leadership of H. Kim Bottomly, Ph.D., professor of immunobiology, began a study that concluded that computational approaches would play a central role in 21st-century biology. The committee has just produced its final report, a blueprint for Yale to stay ahead of the curve in faculty recruitment, funding and facilities.

The committee’s efforts also led to “A Look to the Future,” an October symposium at the medical school’s Anlyan Center chaired by Perry L. Miller, M.D., Ph.D., professor of anesthesiology and director of the Center for Medical Informatics; Mark B. Gerstein, Ph.D., the Albert L. Williams Associate Professor of Biomedical Informatics and associate professor of molecular biophysics and biochemistry; and William L. Jorgensen, Ph.D., the Conkey P. Whitehead Professor of Chemistry.

The symposium assembled seven top researchers from around the world who use computational approaches to attack a variety of biological problems, from untangling phylogenetic relationships between species to manipulating gene sequences to create completely new proteins and enzymes with customized biological functions.

In addition to providing a forum for these scientists to present their latest work, the symposium included several informal brainstorming sessions where Yale scientists and administrators learned how the speakers’ home institutions have risen to the structural and organizational challenges of integrating computational biology into teaching and research.

The BSAC report argues that Yale must increase its research and teaching strengths in computational biology and bioinformatics. It recommends the creation of thematically oriented clusters of faculty at the medical school and on Science Hill. And it proposes the formation of a university-wide center for computational biology and bioinformatics that would foster campuswide interactions of faculty and trainees and provide administrative support.

No one doubts that computational biology is here to stay. “The old view was that biologists were the scientists who didn’t like to think quantitatively,” said Carolyn W. Slayman, Ph.D., Sterling Professor of Genetics and deputy dean for academic and scientific affairs. “Now, biologists must face up to the fact that there’s going to be a demand for greater computational skill and greater expertise in informatics than ever before—it’s the future of the field.”
