How Much Power Should We Give AI In End-Of-Life Decisions?

Could an artificial intelligence algorithm used for end-of-life care decisions predict better than your loved ones whether you’d want doctors to restart your heart if it stops unexpectedly? Or if you have a serious illness, should AI predictions about your overall survival odds be used to prod you to make your wishes clear before there’s a medical emergency?

Ready or not, AI predictions are quietly set to become part of care decisions at the end of life. However, what role they’ll play in relation to human intelligence and values, and whether there can be a “moral” AI that takes those into account, remain wide-open questions.

Two recent studies lay out possibilities. In the first, European researchers used a detailed survey of patient preferences for end-of-life interventions such as cardiopulmonary resuscitation to build three versions of an AI “patient preference predictor.” The “personalized” model analyzed 61 characteristics of each patient, all of whom were at least 50 years old. It accurately predicted the patient’s end-of-life preference 71% of the time, according to an article in NEJM AI. That outperformed not only the average accuracy of human surrogates reported in the medical literature (68%), but also the 59% accuracy of couples predicting their partner’s wishes.
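To make the “patient preference predictor” idea concrete, here is a minimal, purely illustrative sketch of how such a model could be trained and benchmarked. The study’s actual features, model family, and data are not described in this article, so the synthetic data, the logistic-regression choice, and the majority-guess baseline below are assumptions for demonstration, not the researchers’ method.

```python
# Illustrative sketch only: synthetic data and a simple classifier standing in
# for a "personalized patient preference predictor." Nothing here reproduces
# the NEJM AI study's actual model or dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical dataset: 1,000 patients, 61 characteristics each
# (age, diagnoses, functional status, stated values, etc.), plus a
# binary label for whether the patient would want CPR.
X = rng.normal(size=(1000, 61))
true_weights = rng.normal(size=61)
y = (X @ true_weights + rng.normal(scale=2.0, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Any probabilistic classifier could stand in here; logistic regression is
# chosen only because it is simple and transparent.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Model accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")

# A naive baseline that always guesses the majority preference, loosely
# analogous to benchmarking the model against human surrogates.
majority_guess = np.full_like(y_test, y_train.mean().round())
print(f"Majority-guess baseline: {accuracy_score(y_test, majority_guess):.2f}")
```

In practice, the interesting question is not whether such a model can beat a baseline on held-out data, but how its accuracy compares with the human surrogates it might supplement, which is the comparison the European team reports.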

AI Prediction Prods People

A separate, less ethically provocative study involved sharing an AI-generated prediction of a patient’s death risk with that person’s clinician as part of a wider initiative designed to galvanize end-of-life planning. At the eight BJC HealthCare hospitals in the St. Louis area where the initiative was tested, eligible patients’ end-of-life planning surged, with an upswing in use of palliative care and hospice, according to an article in NEJM Catalyst. The death rate within 30 days of discharge dropped by a third from what was expected.

The key was giving patients “the opportunity to express their end-of-life preferences before crisis situations arise,” the researchers wrote, and then appropriately connecting individuals to palliative care, which can sometimes extend life, and hospice, which, contrary to popular perception, can sometimes do the same.

Importantly, neither set of researchers asked a chatbot’s advice on pulling the plug on Grandma. Well aware of the shortcomings of so-called “large language models,” including their tendency to sometimes make up data, both groups utilized sophisticated machine learning techniques applied to data from actual individuals.

The Human Role

The European team pointedly cautioned against relying solely on the algorithm, while implicitly acknowledging this might not always be possible. “Wherever available,” they wrote, “human surrogates should be kept in the loop to provide additional insights into patients’ values and preferences.” AI and human decision-makers should be seen as “fellow workers in a process of co-reasoning,” they added, noting that clinicians and family needed to counterbalance the “perceived authority” of an AI prediction with their own insights. The Europeans even suggested an advance care planning process that involved patients and surrogates incorporating those insights into the training of a personalized AI algorithm.

The St. Louis group emphasized human involvement throughout the process. For instance, they began by acknowledging that clinicians could not only have difficulty identifying which patients had an “elevated risk of mortality,” but, even having done so, could feel uncomfortable initiating a conversation about end-of-life decisions such as an advance directive and other aspects of advance care planning. In response, the researchers trained more than 300 clinicians on “goals of care discussions,” combining information on what to do and why with “experiential learning components.”

That training and the timing of the AI predictions distinguished the St. Louis effort from a 1990s attempt to use a predictive algorithm as a prompt for end-of-life discussions with patients already hospitalized in intensive care. The APACHE III system’s predictions were highly accurate, and the information was helpful for some patients and families facing a difficult reality. For example, when I interviewed a husband and wife at an ICU in Wisconsin back then, they spoke to me about their positive experience discussing what lay ahead. But many patients and families, already under stress, were shocked to be confronted with uncomfortable news (from clinicians with no special training) about their near-term survival odds. Overall, end-of-life decisions did not improve.

In St. Louis, meanwhile, alerts generated by an algorithmic mortality score were not sent directly to the patient’s doctor. Instead, an alert was first reviewed by another clinician who then forwarded it if appropriate – all part of a deliberate effort to avoid “alert fatigue.”

“The review and email process required about 2-3 minutes per patient,” the researchers wrote. They also tried to ensure there were enough palliative care and hospice specialists to handle increased referrals and that bureaucratic barriers were eased.

Over four years, the St. Louis group identified nearly 14,000 patients as candidates for goals of care discussions. The researchers are now examining the economic impact, such as the financial implications of possibly reduced lengths of stay and less use of the intensive care unit and emergency room, said Jessica Londeree Saleska, the lead AI implementation researcher in the Department of Internal Medicine at the Washington University School of Medicine, in an email. Saleska is also chief product officer for Central Health Intelligence, an AI start-up working in end-of-life care that has received support from the university.

Real-World Worries

As AI use grows in all aspects of medicine, so do worries about overestimating its capabilities. That’s particularly true in the highly sensitive area of end-of-life decisions, where a commentary on the European research findings expressed deep concern that the results might be interpreted as evidence that AI can replace human surrogates and the doctors who guide them “through the complex decisions that must be faced.”

“Preferences change over time in ways that cannot be predicted,” emphasized the commentary’s two authors, a pediatric intensivist and a palliative care specialist. The commentary did not, however, address what happens when a patient has no surrogate or surrogates disagree.

Nor did the NEJM AI commentary or either set of researchers address the very real possibility that some patients and families will copy the medical record into one or more chatbots and emerge with a survival prognosis that might not match that of the hospital’s AI.

A Moral AI Surrogate

Meanwhile, in an intriguing paper published on the arXiv site, which is not peer-reviewed, a University of Washington researcher suggested developing an AI surrogate whose predictive accuracy also encompassed the “moral adequacy of representation,” i.e., “the degree to which an AI system honors the patient’s values, relationships, and cultural worldview.” That AI would still be a decision aid, not a decision maker, and its reasoning would be transparent, wrote Muhammad Aurangzeb Ahmad, a resident fellow at UW Medicine who works with UW’s Harborview Medical Center and the university’s Department of Computing and Software Systems.

In an interview with the online publication Ars Technica, Ahmad spoke of trying to develop such a surrogate and said he is in the “conceptual phase” of testing the accuracy of that type of AI model based on Harborview patient data. A spokeswoman for UW Medicine told the publication that no patient has yet interacted with the model and emphasized that there was “considerable work to complete,” including a multiple-stage review process, before additional research with an AI surrogate would be approved.

In a 2024 Forbes.com article, Tal Tova Patalon, a physician who often deals with issues of health, ethics and spirituality, discussed the challenge of building a moral AI model that was both accurate in its individual predictions and explainable in its reasoning.

Patalon urged individuals to avoid possible dependence on AI in end-of-life care decisions by taking the initiative in advance care planning.

“Claiming our personal choice means we will never need a personalized algorithm,” she wrote.



