Welcome to the fifth in an ongoing series of roundtable discussions among Chartis consulting leaders on the emerging reality of artificial intelligence (AI) in healthcare.
AI offers great value for certain healthcare use cases, provided organizations build the carefully constructed technology and governance needed to ensure ethical and responsible use. Personalizing the patient experience for differentiated interactions, stronger engagement, and better health outcomes is a particularly promising area in which to thoughtfully leverage healthcare AI. But organizations will need to navigate the risks.
Join Tom Kiesau, Chartis Chief Innovation Officer and Head of Chartis Digital; Kevin Phillips, Co-Founder and Chief Operating Officer of Jarrard Inc., a Chartis Company; Jody Cervenak, Chartis Informatics and Technology Practice Leader; and Jon Freedman, Partner in Chartis Digital, as they discuss AI, what Chartis is seeing in real time, and what they think is coming next.
Tom Kiesau: Thanks for joining the discussion again, everyone. We’ve covered a lot about planning for AI, generally speaking, but let’s drill down to some practical applications and use cases. Patient engagement use cases are an area in which we’re seeing a lot of effective AI applications. Why are these use cases a good starting point for healthcare organizations early in their AI journey?
JON FREEDMAN:
Patient engagement offers many strategic benefits. AI tools can build a stronger, stickier patient relationship over time. Healthcare organizations are strapped for time and people, and it’s difficult to engage patients as frequently and as intelligently as they’d like.
AI tools can accelerate the pace of communication and improve the quality and consistency of the messages themselves. They can also help tailor engagement, helping the patient feel like you know them and their preferences, including the nuances of how they want their care delivered, their language preference, and even their reading and health literacy level. This tailored engagement can enable patients to better adhere to their care plans and medications, and help health systems more proactively reach out to patients to address their care needs. It can empower the patient to be an active part of their care team and participate in the dialogue.
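To make that tailoring concrete, here is a minimal sketch of preference-driven message selection. The preference fields, language codes, and templates are illustrative assumptions only; a real deployment would pull preferences from the EHR and put every template through clinical and localization review.

```python
from dataclasses import dataclass

@dataclass
class PatientPreferences:
    """Hypothetical preference fields; real systems would pull these from the EHR."""
    name: str
    language: str        # e.g., "en", "es"
    reading_level: str   # e.g., "simple" or "standard"

# Illustrative templates keyed by (language, reading_level); production templates
# would be managed with proper localization and clinical review.
TEMPLATES = {
    ("en", "simple"): "Hi {name}, it's time for your yearly check-up. Reply YES to book a visit.",
    ("en", "standard"): "Hello {name}, our records show you are due for an annual wellness visit. Reply YES and we will schedule it.",
    ("es", "simple"): "Hola {name}, es hora de su chequeo anual. Responda SI para reservar una cita.",
}

def tailor_outreach(prefs: PatientPreferences) -> str:
    """Pick the template matching the patient's language and reading level,
    falling back to standard English if no match exists."""
    template = TEMPLATES.get(
        (prefs.language, prefs.reading_level),
        TEMPLATES[("en", "standard")],
    )
    return template.format(name=prefs.name)

print(tailor_outreach(PatientPreferences(name="Maria", language="es", reading_level="simple")))
```

The design point is simply that the selection logic, not the patient, does the work of matching language and literacy level.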
JODY CERVENAK:
Another opportunity is the patient intake process. It has been grounded in traditional rules and guidelines that require asking patients the same questions over and over again. But much of that information doesn’t need to be collected anew every time the patient enters a health system. Re-asking the questions can be a frustrating waste of time for both patients and the care team.
AI tools can help validate the patient history and current health state so the patient can have a meaningful discussion with their care team about things that might have changed since the last time, and the care team can ask targeted questions to update or confirm existing information. Having an up-to-date patient history also communicates an important message to the patient: We care about you, we care about your time, and we want to make sure that we’re talking to you about relevant things.
A related opportunity is for AI to facilitate the critical role of genetics and family history, information that may already be stored in the health system’s records. AI tools could assemble that family history and flag associated risks, easing the burden for patients who struggle to recall everything.
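A minimal sketch of the targeted-confirmation idea Jody describes: flag only the intake fields that are stale, rather than re-asking everything. The field names and re-confirmation windows below are hypothetical assumptions; actual policies would come from the organization’s registration and compliance teams.

```python
from datetime import date, timedelta

# Hypothetical re-confirmation windows per intake field; actual policies would
# be set by registration and compliance teams, not hard-coded like this.
REVALIDATION_WINDOWS = {
    "address": timedelta(days=365),
    "insurance": timedelta(days=180),
    "medication_list": timedelta(days=90),
}

def fields_to_confirm(last_verified: dict, today: date) -> list:
    """Return only the intake fields whose last verification is older than its
    window, so staff can ask targeted questions instead of re-asking everything."""
    stale = []
    for field, window in REVALIDATION_WINDOWS.items():
        verified_on = last_verified.get(field)
        if verified_on is None or today - verified_on > window:
            stale.append(field)
    return stale

print(fields_to_confirm(
    {"address": date(2022, 1, 10), "insurance": date(2023, 6, 1)},
    today=date(2023, 9, 1),
))
# -> ['address', 'medication_list']  (insurance was verified recently, so it is skipped)
```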
Tom: We’ve touched on 3 discrete subdomains of AI that are helpful to think about in the context of patient engagement: (1) Machine learning models that identify what providers should be asking patients, and when, such as flagging when a patient’s address or insurance may have changed. (2) Large language models for creating messages that include the right content, communicated in a more empathetic and thoughtful way. (3) Natural language processing (NLP) for extracting important points from what the patient says, both during live interactions and in unstructured text they submit (a toy sketch follows).
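As an illustration of that third subdomain, the sketch below pulls structured flags out of unstructured patient text. A real system would use a trained clinical NLP model; simple keyword matching stands in for one here, and the term lists are made up.

```python
# Toy vocabularies standing in for a clinical NLP model; real systems would use
# trained entity extraction, not hand-maintained keyword lists.
SYMPTOM_TERMS = {"dizzy", "dizziness", "chest pain", "short of breath", "nausea"}
MEDICATION_TERMS = {"lisinopril", "metformin", "ibuprofen"}

def extract_key_points(message: str) -> dict:
    """Pull symptom and medication mentions out of unstructured patient text
    so they can be surfaced to the care team as structured flags."""
    text = message.lower()
    return {
        "symptoms": sorted(t for t in SYMPTOM_TERMS if t in text),
        "medications": sorted(t for t in MEDICATION_TERMS if t in text),
    }

note = "I've been dizzy since I started the lisinopril, and some nausea too."
print(extract_key_points(note))
# -> {'symptoms': ['dizzy', 'nausea'], 'medications': ['lisinopril']}
```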
A common theme across these subdomains of technology is the central ongoing role of humans in the interaction. How do the human and technology components combine for optimal patient engagement?
KEVIN PHILLIPS:
The key to patient engagement is the key to communications in general: trust and transparency. There’s trust in the message and in the messenger, and then there’s trust and transparency in the information.
Doctors are using AI to communicate with greater empathy and to better balance clinical advice with compassion. They’re even using it to help them deliver bad medical news. Doctors can use AI to translate complex jargon and concepts into messages that are easy to understand. Applied with empathy in mind, AI can really help in patient engagement.
Patients want to know that AI is used as a complement and not as a replacement for the clinician they have a relationship with.
JON:
I think of AI for patient engagement in 3 different ways. There’s AI that’s strictly behind the scenes, such as algorithms used to deploy predefined messages to promote preventative care. There’s AI that’s directly interacting with consumers, such as chatbots. And then there’s that middle ground where AI is actively assisting the clinician, such as in simplifying explanations of diagnoses or suggesting messages that the clinician can validate and then send.
In each of these areas, the human decision-maker is still the architect of the communications. As such, ensuring the technology is thoughtfully deployed to align with how those decision-makers actually work is essential to realizing true value.
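One sketch of what “the human is still the architect” can mean in practice: an AI-drafted message that cannot reach the patient until a clinician reviews it. The workflow and field names are assumptions for illustration, not a reference implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftMessage:
    """An AI-suggested patient message a clinician must act on before it can be sent."""
    patient_id: str
    body: str
    status: str = "pending_review"   # pending_review -> approved | rejected
    reviewer: Optional[str] = None

def review(draft: DraftMessage, clinician: str, approve: bool,
           edited_body: Optional[str] = None) -> DraftMessage:
    """Record the clinician's decision; the human, not the model, authorizes the send."""
    draft.reviewer = clinician
    if approve:
        if edited_body:              # the clinician may revise the suggestion first
            draft.body = edited_body
        draft.status = "approved"
    else:
        draft.status = "rejected"
    return draft

def send_if_approved(draft: DraftMessage) -> bool:
    """Only approved drafts ever reach the patient."""
    if draft.status != "approved":
        return False
    print(f"Sending to patient {draft.patient_id}: {draft.body}")
    return True

draft = DraftMessage(patient_id="p-001",
                     body="Your lab results look stable; no changes to your plan.")
review(draft, clinician="Dr. Lee", approve=True)
send_if_approved(draft)
```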
Tom: How can healthcare leaders mitigate risks (such as saying the wrong thing or proactively reaching out to patients for whom the outreach is inappropriate) and avoid stepping on the landmines that are undeniably strewn across the field?
KEVIN:
First off, the potential risks do span the field, from data privacy and security issues, to built-in or amplified bias, to missed diagnoses and errors. Organizations also need a process in place to constantly evaluate and refine algorithms and the associated data to keep them up to date. If an organization over-relies on AI, human error baked into that technology could quickly turn it sideways.
That said, when AI is used with the appropriate guardrails and governance, you can cultivate patient trust with transparency and reassurance, and increase staff productivity. Remind patients that AI is already in use in ways that people often don’t consider as AI (such as scheduling, appointment reminders, and medication management). Assure them that AI doesn’t mean their doctor will be a robot next week. AI is a complementary mechanism (such as double-checking diagnoses and imaging reports, almost like a second opinion), and their human care team will still be primary.
If you are up front about that, it can help ease consumers’ concerns.
JON:
Related to that, healthcare organizations need to think about those 3 different kinds of AI use I mentioned earlier and consider what level of transparency they should have for each. They need to plan for how they will communicate about them to patients, without being opaque or confusing.
And they need to put appropriate checks on how they’re employing the technology, ensuring the appropriate level of human empathy and involvement. Part of that is defining how the organization will use technology in patient engagement.
One low-risk patient engagement use case is summarizing content into key points. Another potentially powerful use case is leveraging AI tools to help patients understand their health plan benefits and empowering them to take advantage of benefits that are “no cost” (to them), like preventative care. AI can present benefits and cost estimates in simplified messages.
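As a sketch of that benefits use case, the function below only builds the prompt; the LLM call itself is left to whichever approved client and model the organization uses, and the instruction wording is an assumption for illustration.

```python
def build_benefits_prompt(plan_summary: str, reading_level: str = "6th-grade") -> str:
    """Compose an LLM prompt asking for a plain-language summary of a member's
    benefits, highlighting services that are free to the member."""
    return (
        f"Rewrite the following health plan benefits at a {reading_level} reading level. "
        "Clearly list any services that cost the member nothing, such as preventative care. "
        "Keep it under 100 words and do not give medical advice.\n\n"
        f"Benefits text:\n{plan_summary}"
    )

plan_text = (
    "Preventive services, including annual wellness visits and routine "
    "immunizations, are covered at 100% with no member cost-sharing when "
    "delivered in network."
)
print(build_benefits_prompt(plan_text))
# The returned prompt would go to the organization's approved LLM endpoint,
# and the model's response would be reviewed before reaching the patient.
```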
JODY:
Patients want to know that they’re getting a better experience and the highest quality of care and outcomes. While other industries face less pressure to do so, the very human nature of healthcare makes it critical for organizations to figure out how they will appropriately highlight their use of AI.
Tom: Given unique organizational complexities and the blurred lines between what is clinical versus nonclinical, AI-specific guidelines will likely need to be bespoke to each organization.
These defined guidelines will need to cover AI use, transparency, and communication. And organizations will need a process for how they respond when instances of AI use run up against those guardrails. They need to know how they will review, advance, and revise guidelines, and how they will communicate when something changes or goes wrong. It’s a complicated communication planning exercise that every health system needs to go through.
What are some of the key considerations healthcare leaders should be thinking about?
JON:
When things are going well, patients are unlikely to care much about how an organization is using AI. But they will care very much the moment things seem amiss or the AI just doesn’t work seamlessly, the way it should. Things can veer off in lots of different unintended directions—and, if not actively managed, they will. Understanding that things will not always go as intended and having a prepared process in place to identify and respond will be important to address those situations when they inevitably arise.
KEVIN:
It’s like when Tesla’s Autopilot feature doesn’t see the car in front of it and crashes, despite the fact that, in aggregate, it has fewer crashes per highway mile driven than unassisted human drivers. Consumers don’t want to be the one for whom the system fails, regardless of how good it is overall. People are fascinated by AI, but there’s fear, too, because people don’t understand how AI functions and hear examples of wrong outputs. Failures will be highlighted, so explicitly considering the impact of those failures is critical.
JODY:
Building off that example, human failures already abound, whether in causing car accidents or in engaging patients. Has your organization assessed its current state without AI intervention? How many “crashes” are you having every day that you just don’t track? How many wrong diagnoses, incorrect responses, or (even worse) no responses at all are happening in your organization today?
The healthcare industry has deep methodologies and studies around the effectiveness of new drugs and clinical interventions. It would be wise for healthcare organizations to bring the same discipline to studying their AI changes: identifying the true pre-AI baseline and objectively measuring effectiveness after implementation.
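A minimal sketch of that pre/post discipline, assuming response rate to outreach as the outcome metric: a standard two-proportion z-test comparing the pre-AI baseline against the post-implementation period. The counts below are invented for illustration.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(success_a: int, n_a: int, success_b: int, n_b: int):
    """Two-sided z-test for whether two response rates differ, using the
    pooled-proportion standard error."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Invented numbers for illustration: 1,200 pre-AI reminders drew 540 responses;
# 1,150 post-AI reminders drew 610.
z, p = two_proportion_ztest(540, 1200, 610, 1150)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests the change is not just noise
```

In practice, organizations would also guard against seasonality and population shifts between the two measurement periods.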
And while this constant study of where you are now and where you are going will be critical, so will being prepared for when things go wrong. Health systems need to have the appropriate reaction, be prepared to explain what happened, and take responsibility.
Tom: Leveraging AI to elevate the patient experience will require a focus on empathy, integrated human oversight, defined AI guidelines, process transparency, and clear AI-related communications. Thank you all for the discussion today. I look forward to our next AI roundtable.
© 2023 The Chartis Group, LLC. All rights reserved. This content draws on the research and experience of Chartis consultants and other sources. It is for general information purposes only and should not be used as a substitute for consultation with professional advisors. It does not constitute legal advice.