
Using AI and Machine Learning for a More Accurate Prostate Biopsy

May 10, 2022
by Jane E. Dee

John Onofrey, PhD, is in the vanguard of a new kind of precision medicine: using AI and machine learning to make up for the limitations of imaging technology and improve cancer detection.

Prostate cancer is the second leading cause of cancer death in men in the U.S.—which means that when a malignancy is suspected, getting an accurate prostate biopsy is crucial. Yet the standard method urologists use to biopsy for the disease is not as accurate as it could be.

Typically, a urologist will use transrectal ultrasound (TRUS) imaging technology to guide the needle as they perform the biopsy. “But with ultrasound imaging, cancerous lesions in the prostate don’t tend to show up well,” said Onofrey, assistant professor of urology and of radiology and biomedical imaging.

To make up for the limitations of the ultrasound image, a urologist will generally take 12 small biopsy samples along the prostate in a 4-by-3 grid. That may seem all-encompassing, but it accounts for less than 0.5% of the volume of the prostate, explained Onofrey. “Some researchers have likened this process to a game of chance in terms of whether this technique is actually going to be able to detect cancer in a clinically significant way.”
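The arithmetic behind that figure is easy to check. The short Python sketch below does so with assumed values that are not from the article: a typical prostate volume of roughly 40 cubic centimeters and a standard 18-gauge biopsy core about 1 millimeter wide and 17 millimeters long.

    import math

    # Back-of-the-envelope check of the "less than 0.5% of the prostate" figure.
    # Assumed values (not from the article): a ~40 cm^3 prostate and an
    # 18-gauge biopsy core ~1 mm in diameter and ~17 mm long.
    PROSTATE_VOLUME_CM3 = 40.0
    CORE_DIAMETER_CM = 0.1   # ~1 mm
    CORE_LENGTH_CM = 1.7     # ~17 mm
    NUM_CORES = 12           # the standard 4-by-3 template

    # Each core is approximately a thin cylinder.
    core_volume = math.pi * (CORE_DIAMETER_CM / 2) ** 2 * CORE_LENGTH_CM
    sampled_fraction = NUM_CORES * core_volume / PROSTATE_VOLUME_CM3
    print(f"Sampled fraction of the gland: {sampled_fraction:.2%}")  # ~0.40%

Even under these generous assumptions, twelve cores sample roughly 0.4 percent of the gland, which is why the template approach can behave like the game of chance Onofrey describes.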

Blame the nature of human anatomy—and of the prostate. “It’s a squishy gland,” said Onofrey. On an ultrasound, it looks different depending on how a patient moves or breathes, and on whether their rectum is filled with air or with stool.

“That means the same features you see in an MRI show up looking completely different on an ultrasound—there’s not a one-to-one correspondence there,” said Onofrey. “Think of the prostate as a rubber ball that you can flatten out—the urologist is often left trying to align one image from the MRI machine with a totally different ‘deformed’ image from the ultrasound. Even though the images come from the same person, they look nothing alike.”

Overcoming the Tricky Prostate Gland with AI and Machine Learning

That’s where Onofrey’s research comes in. He and his postdoctoral advisor, Xenophon Papademetris, PhD, professor of radiology and biomedical imaging and of biomedical engineering at Yale’s School of Engineering and Applied Science, surmised that, with the right algorithms, it might be possible to predict those differences and take them out of the equation altogether so that the two images align. To that end, Onofrey developed an algorithm based on ultrasound data from the biopsies of more than 100 patients. “Our aim was to come up with a model that could withstand the errors in the ultrasound—that would be robust in spite of those errors.”

The model they came up with, known as a non-rigid “deformation model,” acts as a template prostate, one that serves as a kind of atlas “of how the prostate changes shape between the MRI image and the ultrasound without worrying about someone’s prostate actually changing shape,” he explained. In a 2017 NIH-funded study published in the journal Medical Image Analysis, Onofrey and Papademetris, along with their co-authors, found that this model was “significantly robust to increasing levels of noise”—meaning that it was able to set aside inaccuracies in the image due to the ever-changing shape of the prostate gland.
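For the actual method, the published study should be consulted; the fragment below is only a minimal, self-contained sketch of the general idea behind a population-learned deformation model: learn the dominant modes of prostate shape change from many training cases, then constrain any new, noisy alignment to that learned subspace. All array sizes, names, and the shrinkage step here are hypothetical stand-ins, not the authors' implementation.

    import numpy as np

    # Sketch of a statistical deformation model (illustrative only).
    # Each training case is a flattened displacement field that warped
    # that patient's MRI prostate onto the ultrasound.
    rng = np.random.default_rng(0)
    n_train, n_dims = 100, 3 * 500            # hypothetical: 100 cases, 500 points
    train_fields = rng.normal(size=(n_train, n_dims))  # stand-in for real data

    # Learn the dominant modes of shape change across the population (PCA).
    mean_field = train_fields.mean(axis=0)
    centered = train_fields - mean_field
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    modes = vt[:10]                            # keep the top 10 deformation modes

    def fit_deformation(observed, lam=1.0):
        """Project a new, noisy displacement estimate onto the learned modes."""
        coeffs = modes @ (observed - mean_field)
        coeffs /= 1.0 + lam / (s[:10] ** 2)    # ridge-style shrinkage toward the mean
        return mean_field + modes.T @ coeffs

    noisy_observation = rng.normal(size=n_dims)
    plausible_field = fit_deformation(noisy_observation)

Restricting the fit to modes actually seen across the population is what buys robustness in a model of this general kind: measurement noise that does not resemble a plausible prostate deformation is simply projected away.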

More recently, in a 2019 pilot study with 20 patients at Yale and 20 patients at Stanford University, Onofrey tested his model in the clinic with men undergoing prostate biopsies. For this, Onofrey collaborated with Preston Sprenkle, MD, associate professor of urology, who performs image-guided biopsies weekly at Yale.

“To our knowledge, it’s the first time anyone has been able to test an algorithm right in the clinic,” Sprenkle said. The advantage is that urologists can see, in real-time, if what they come up with manually matches the model.

A Prostate Biopsy That’s Not a Game of Chance

This model benefits patients by helping to ensure that urologists take their biopsy samples from the correct sections of the prostate gland rather than relying on the game-of-chance approach. During the pilot study, when Onofrey’s computer-generated model and the urologist’s manual fusion were in agreement as to the likely location of a lesion, “the urologist felt very comfortable sampling that area,” he said. And when the two methods yielded wildly different results? “That’s when the urologist could conceivably sample a wider area.”
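As a rough illustration of that decision logic, and not anything taken from the study itself, one could imagine comparing the two suggested target locations and widening the sampling region when they diverge. The tolerance below is an invented placeholder.

    import numpy as np

    # Hypothetical sketch of the agreement check described above: compare the
    # lesion location suggested by the registration model with the urologist's
    # manual fusion, and widen the sampling region when they disagree.
    AGREEMENT_MM = 5.0   # assumed tolerance, not a value from the study

    def sampling_plan(model_target_mm, manual_target_mm):
        distance = float(np.linalg.norm(model_target_mm - manual_target_mm))
        if distance <= AGREEMENT_MM:
            return f"targets agree ({distance:.1f} mm apart): sample the lesion directly"
        return f"targets diverge ({distance:.1f} mm apart): sample a wider area"

    print(sampling_plan(np.array([10.0, 22.0, 5.0]), np.array([11.5, 23.0, 5.5])))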

Onofrey eventually hopes to use the data from hundreds and even thousands of prostate images to improve the accuracy of his model—and to expand its use. “There’s real power in using AI and machine learning, not to replace human beings, but to help them make more accurate decisions.” This kind of work, he said, gets to the heart of personalized, precision medicine. “It would be great to one day be able to predict patient outcomes based on what their personal history looks like, and even to be able to figure out what side effects someone might experience.”

For the time being, however, his goal is narrower, though just as significant: “To marshal the resources of Yale and all its expertise in machine learning and image processing to improve our ability to diagnose prostate cancer.”

Originally published March 31, 2020; updated May 10, 2022.
