
Hard choices: AI in health care

Yale Medicine Magazine, 2021 Issue 166


Artificial intelligence will change the health care industry, not least by raising serious moral issues.

“Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should,” said Jeff Goldblum in his role as mathematician Ian Malcolm in the movie Jurassic Park. Recent advances in artificial intelligence (or AI, as it’s colloquially known) have some scientists posing that same question as they see it deployed within their own departments.

From assisting medical procedures and diagnostics to analyzing patterns in research data, AI enhances medical science in many ways. Wendell Wallach, scholar and chair emeritus of technology and ethics studies at the Yale Interdisciplinary Center for Bioethics (ICB), said that many aspects of current health care already incorporate AI, but that there may be serious consequences if it isn’t designed and used properly. As AI systems become more advanced and widely used, health care professionals, organizations, and patients will face progressively more complex moral questions.

“Almost every aspect of the AI design process and in many cases aspects of its actual usage have flaws that generate ethical problems,” said Nisheeth Vishnoi, PhD, the A. Bartlett Giamatti Professor of Computer Science and co-founder of Yale’s Computation and Society Initiative.

Two of the most pressing current ethical considerations involve the potential loss of physician autonomy and the unconscious amplification of underlying biases. “AI is already helping doctors make decisions,” said Olya Kudina, PhD, assistant professor in the ethics of technology at Delft University of Technology in the Netherlands, who teaches a summer course at ICB on the philosophy and ethics of technology. “It’s important to take key ethical issues into consideration now.”

AI: health care assistant or tyrant?

Ideally, AI should benefit both practitioners and patients. Offloading time-consuming tasks like record-keeping, prescription writing, or even such tedious elements of medical research as examining large datasets to detect patterns could free clinicians to spend more time with patients. “It might seem like a paradox,” said Daniel Tigard, PhD, “but we’re hoping to give health care more of a human touch by using tech to allow doctors more time to spend with humans.”

In theory, AI may eventually advance to the point of effectively replacing human doctors. But this fundamental shift could have deep existential implications for physicians’ authority and autonomy, and ultimately their liability.

Tigard, a senior research associate at the Technical University of Munich’s Institute for History & Ethics of Medicine in Germany who taught a summer course at Yale’s ICB on moral distress in medicine, explains that American society shifted only recently from the model of ‘doctors know best’ to ‘patients know best.’ “Now we’re seeing a new shift,” he said, “where machines seem to ‘know’ best.” And as medicine moves into a world driven by AI, much of a doctor’s autonomy and decision-making responsibility may be transferred to AI systems, said Joseph Carvalko, BSEE, JD, chair of the Technology and Ethics Working Research Group at Yale’s ICB.

Carvalko thinks that doctors will increasingly be pressured to hand over authority to AI technologies, and may even be held legally accountable for overriding the decision of a machine using the latest technology. Wallach echoes this concern, saying that under these conditions, doctors may not feel comfortable going against an algorithm’s decision.

“Accountability, responsibility, culpability, and liability will get increasingly complicated,” said Wallach—especially in such a litigious country as the United States, where lawyers will have audit trails and other evidence proving what the machine proposed versus what the doctor did. Most experts think that this issue will almost inevitably fuel a proliferation of legal battles. But the question of whether it’s ethically just to hold humans accountable for AI decisions and their implications is far from clear.

Tigard explains that traditional legal liability depends on moral autonomy—a condition that relies on factors doctors normally possess, such as knowledge and control. But it’s not always clear how AI systems make decisions. “Before, a doctor could see when their scalpel needed to be sharpened,” Wallach said. “Now, for all intents and purposes, doctors are working off of an invisible scaffold with invisible tools.”

So, if doctors can’t be held accountable, should accountability shift to the AI system? Many experts say no, because AI lacks the human qualities needed to make moral decisions that depend on empathy and semantic understanding. But the fact that AI systems can’t currently be held liable for their decisions shouldn’t stop efforts to incorporate some form of moral responsibility into them. Tigard emphasized that AI is designed to evolve. “We’re talking about things made to learn,” said Tigard. “Let’s teach.”

Those who make or deploy AI systems can be held legally liable, according to Wallach, though how that liability is monitored and enforced has yet to be fully explored. Another liability issue is that AI systems are always changing and evolving, creating unpredictable new risks and, in some cases, generating new AI of their own. When an AI system creates a new and independent system, what—or who—is then to be held accountable?

Brian Scassellati, PhD, the A. Bartlett Giamatti Professor of Computer Science & Mechanical Engineering & Materials Science at Yale, works with in-home socially assistive robots that encourage children with autism spectrum disorders to do therapeutic exercises. Scassellati said these robots adapt to each child’s strengths and weaknesses to become more effective, so they’re constantly changing within a certain range. “This means I don’t know exactly what [the robot] will do at any given time,” he said, “which is a scary thing as a researcher.”

Unconsciously biased results

Bias is another core ethical dilemma presented by AI medical technologies, one highlighted by everyone interviewed. Kudina said she thinks that when it comes to the use of AI in medicine, the combination of human and machine bias poses a tricky ethical problem.

“While people are frequently steered by unconscious preferences, AI also highlights certain areas of attention, making others less visible,” she said. “Depending on which dataset AI is trained on, it will inevitably become targeted at one population and by default will discriminate against other populations.”

Because AI medical technologies influence elements of patient care and doctors’ decision-making processes, these biases could render the technology useless—or more trouble than it’s ethically or legally worth—in many scenarios by skewing data in ways that make the technology inappropriate for large segments of society. And by all accounts, AI bias will be extremely difficult, if not impossible, to detect and root out, given that bias is often inherent, systematic, and invisible.

Vishnoi explains that AI algorithms embody design choices that often ignore important facts about the data being used, such as who is using it and how it was gathered and presented, creating software biases that reflect or amplify those of the people who built the system and of the data used to train it.

Many AI systems also reflect the society in which their creators were educated, incorporating cultural biases that make it difficult to use the same software appropriately in different geographic locations.

Kudina said she lives and works in the Netherlands, where doctors have refused to adopt AI systems created and tested abroad—like Watson, a system developed by IBM in 2011 and first used by Memorial Sloan Kettering Cancer Center in 2013. Kudina cited a STAT News investigation that found that Watson had been trained on a specific population and on ways of treating cancer typical of the United States. As such, Dutch doctors simply don’t see Watson as an appropriate tool for their culture of cancer treatment.

“They didn’t see their patients or typical treatment patterns represented in the technology,” she said. “Right now, many AI technologies work within a very narrow restricted viewpoint that tends to overlook cultural and societal assumptions, expectations, and truths. This needs to change for it to be relevant for use in more than just one setting.”

Experts say we can reduce some of these biases by using more diverse datasets to train AI systems, but we’re unlikely to eliminate them completely, given how prone humans and most medical datasets are to bias. “Society is built around reinforcing its own structure,” said Vishnoi. “It’s not surprising that datasets pick up on these reinforcing factors.”

And many of those interviewed, like Wallach, conclude that for AI medical technologies to advance, the best way forward may be simply to acknowledge that AI systems will have biases and to accommodate or adjust for them as best we can. “We may have to simply accept that we can’t create fully unbiased systems and move on,” Wallach said. “We can’t change human nature.”

Still looking for answers

While intellectuals, researchers, developers, and health care organizations are racing to develop solutions, society currently lacks the ability to address most of the ethical problems posed by AI. Scassellati said that scientific communities and most industries typically have ethical standards and a system to enforce them, but that the field of AI is too young and fast-changing to have formulated and imposed defined standards.

“It’s the Wild West frontier of the research world,” said Scassellati. “We’re running into problems we’ve never seen or even considered and stumbling across new ethical questions almost every day that we often have no way to solve.”

One solution is to revise medicine’s codes of ethics to include AI considerations, most importantly the value of explicability. “A doctor needs to have the means of knowing how a decision was made so they know how much to trust the machine or their instinct to go against it,” Kudina said, adding that informed consent forms must also be altered to educate patients and doctors about risks, biases, and other difficult-to-predict factors that influence how much AI shapes doctors’ decisions.

Wallach said a new set of AI ethical codes may be easier to create and deploy in medicine because medicine has clearer, more concrete, and more universal principles and goals than many fields or industries. But with such diverse stakeholders and the rapid rate of technological change, just keeping up with advances in AI is almost impossible, let alone formulating a set of binding ethical guidelines.

Vishnoi and others say the public and health care entities must also demand improvements in AI fairness. He said that while research and conversations at Yale help inform agencies like the United Nations Human Rights Commission about AI ethical concerns, sometimes it isn’t even clear which questions should be put to companies to protect the health care industry and the wider society.

“Today our faces, fingerprints, even heartbeats are being digitalized,” Kudina said, “Who knows how far society and the health care industry will let this trend go? But regardless, the possible implications to medicine are endless, and what it means to be a doctor and patient may fundamentally change.”
