Guest Blog: Navigating the AI Revolution: Patient-Centered AI Use in Health Care

By: Allison Isaacson, MPH and Rachel Dungan, MSSP, AcademyHealth  

Artificial intelligence (AI) holds immense potential for enhancing and modernizing health care, with use cases ranging from interpreting X-ray results to personalizing treatment regimens. However, patients have different levels of trust and comfort with the use of AI technologies in their care. Just as providers can improve patient trust by tailoring communications to patients’ language preferences or health literacy levels, customized approaches to talking about AI use can help relieve patients’ worry and improve their outcomes. 

AI broadly includes technologies that enable computers or other machines to perform tasks in ways that imitate human intelligence, such as reasoning, problem-solving, and decision-making. Modeled after the complex ways our brains source, organize, integrate, and understand massive amounts of information, AI tools and systems use various algorithms and computational techniques to process large volumes of data, extract patterns, and make predictions or decisions based on those patterns. With AI, a radiologist could theoretically compare a breast cancer scan with every other scan they had ever examined, or even with every scan ever documented or published. One study of breast cancer diagnoses found that AI-assisted diagnoses were both more accurate and arrived at more quickly and efficiently than those made by radiologists alone.  
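
For readers curious what “processing data, extracting patterns, and making predictions” looks like in practice, the short Python sketch below trains a toy classifier on a public breast cancer dataset bundled with the scikit-learn library. The dataset, model choice, and accuracy check are illustrative assumptions only; no clinical AI system works at this level of simplicity.

```python
# A minimal sketch, assuming scikit-learn is installed, of the general idea
# described above: a model "learns" statistical patterns from labeled
# examples, then predicts labels for cases it has never seen. The bundled
# breast cancer dataset is used purely for illustration; this is nothing
# like a clinical-grade diagnostic system.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Load 569 tumor records, each described by 30 numeric features
# (e.g., mean radius, texture) and labeled malignant or benign.
X, y = load_breast_cancer(return_X_y=True)

# Hold out 20% of the records so predictions are checked on unseen cases.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# "Training" = extracting patterns that relate the features to the labels.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# "Prediction" = applying those learned patterns to new records.
print(f"Accuracy on unseen cases: {model.score(X_test, y_test):.2f}")
```

In a real diagnostic tool, this basic learn-then-predict loop is scaled up to far larger datasets, richer imaging features, and extensive clinical validation.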

While there are seemingly endless possibilities for how AI can improve care, there are barriers to implementing AI in patient-centered ways. It can be hard to describe these technologies without using technical jargon. Bias in the algorithms underlying AI systems, or in the data fed into them, may also reduce AI’s effectiveness and real-world applicability. Lastly, because AI technologies adapt and improve in real time, even patients who have an accurate understanding of how they work may have trouble keeping pace with new developments.  

A project focused on patient trust and clinical decision support found that, while patients’ views on AI in health care largely differed based on their comfort with technology and personal preferences, the majority believed AI has the potential to strengthen and streamline health care. In small group discussions with patient and caregiver advocates, most participants believed they had encountered AI in their health care experiences but were unable to recall specific instances. Additionally, none of the participants had ever been directly informed that AI tools were used in their care.  

So, what is the best way to talk about AI in health care with patients? This depends on several factors, including the trust between a patient and provider, the patient’s values, and whether certain information is likely to positively or negatively affect outcomes.  

Trust and Transparency Between Patients and Providers. In the small group discussions mentioned above, patients stated they would be more likely to accept and trust AI tools used in their care if these were introduced by a trusted clinician or health system. Growing public awareness of the concerns and caveats of AI use makes it even more important for patients to understand why their clinician is comfortable, or even enthusiastic, about using these technologies to support their care. This can be challenging given the limited transparency around, and limited provider understanding of, the algorithms or processing mechanisms driving these systems, which can seem to produce results in a “black box” of sorts. Additionally, if providers reveal their own lack of AI-related knowledge or trust in the technologies, patients may feel confused or uneasy.  

Weighing Patient Values and Informational Needs. Taking patients’ values into account can help providers effectively integrate AI into health care and clinical decision-making. One article on patient values related to AI in oncology care suggests that patients prefer that AI be used to assist providers rather than replace them; patients continued to view human interaction and empathy as the cornerstone of their care. Many patients also value transparency and may appreciate an explanation of the clinical data that informed the AI, as well as of the ways their own data may be used to help train or test new AI models. Explaining the risks, benefits, and potential alternatives can help ensure a patient’s care aligns with their priorities, such as independence and autonomy.  

Considering How AI Use Is Communicated to Patients. AI use in radiology has increased dramatically over the last decade: in 2020, 33% of radiologists reported using AI, and another 20% expected to begin using it within five years. AI can help radiologists arrive at an accurate diagnosis and create a helpful treatment plan. However, how AI use is communicated to patients can directly affect both clinical decisions and outcomes, and providers must commit to curating AI results and integrating their own clinical expertise. A 2023 study that tracked radiologists’ tendency to agree or disagree with AI-generated results, depending on whether patients knew AI was used, found that radiologists were less likely to disagree with those results when patients were aware of AI use. Even when the results were inaccurate, providers tended to agree with the diagnosis so that patients would not see a discrepancy between the AI result and the provider’s own opinion. In these cases, radiologists might have arrived at the right diagnosis sooner had they not used an AI-supported system. 

AI in health care is here to stay. As such, patients need relevant information about the AI used in their care, delivered in a constructive and accessible way. The patient’s relationship with their provider, their core values, and how AI affects their outcomes must all be considered when approaching conversations about AI use in patient care. 
