Welcome to the seventh in an ongoing series of roundtable discussions among Chartis consulting leaders about the emerging reality of artificial intelligence (AI) in healthcare.

In the face of mounting workforce and care capacity challenges, healthcare organizations can leverage AI tools to better allocate resources, alleviate unnecessary burdens, and increase capacity for patient care and interactions.

Join Tom Kiesau, Chartis Chief Innovation Officer and Digital Transformation Leader; Julie Massey, MD, Principal, Clinical Technology Innovation; Jon Freedman, Principal, Digital Transformation; and Bret Anderson, Principal, Digital Transformation, as they discuss AI, what Chartis is seeing in real time, and what they think is coming next.


Tom Kiesau: Thanks for being a part of today’s roundtable. In our previous roundtables, we talked a lot about the administrative and financial applications of AI. Let’s start today on the clinical side. What are some especially valuable AI use cases for deploying care team resources efficiently and effectively? 

JULIE MASSEY, MD: 

A few areas stand out as opportunities for leveraging AI to synthesize large volumes of material; medical records synthesis and clinical documentation are particularly ripe.

We can ease the cognitive burden for physicians with lower-risk applications that surface and visualize relevant patient insights out of the tomes of historical patient data—like discharge summaries, longitudinal records, and previous clinical notes. Of course, the underlying data first must be validated and trusted. It’s also essential to put a structured monitoring and review process in place because the perceived risk of missing something generates a lot of clinician angst.

Similarly, it’s striking how much time clinicians, especially nurses, spend on repetitive documentation that could be effectively augmented or even offloaded. We frequently hear from clinicians about how much of their time is not patient-facing. AI applications could reduce that burden and help clinicians tell the patient’s clinical story, rather than spending their time as data-entry clerks.

Another area of opportunity is reducing the sheer amount of manual effort that goes into staffing plans. Organizations can unburden clinical managers by taking staffing plans out of manual spreadsheets and matching staffing to projected demand in both the acute and ambulatory settings.
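To make that concrete, here is a minimal Python sketch of the idea: translating a projected census into a staffing target and flagging shifts that fall short. The ratio, census figures, and shift names are illustrative assumptions, not client data or a Chartis model.

import math

# Illustrative only: assumed nurse-to-patient ratio and made-up census/schedule figures.
TARGET_RATIO = 4  # assumed patients per nurse on a med-surg unit

projected_census = {"Mon-Day": 28, "Mon-Night": 22, "Tue-Day": 31, "Tue-Night": 24}
scheduled_nurses = {"Mon-Day": 7, "Mon-Night": 5, "Tue-Day": 7, "Tue-Night": 6}

for shift, census in projected_census.items():
    needed = math.ceil(census / TARGET_RATIO)      # nurses required to hold the ratio
    gap = needed - scheduled_nurses.get(shift, 0)  # positive = short, negative = slack
    status = "short" if gap > 0 else "covered"
    print(f"{shift}: need {needed}, scheduled {scheduled_nurses.get(shift, 0)} ({status}, {gap:+d})")

A real staffing model would also account for acuity, skill mix, and regulatory requirements; the point is simply that the projection-to-plan step is automatable once demand forecasts exist.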

Finally, AI can make clinician training more efficient by taking real data and learnings and applying them to training for advanced competencies. We’re not thinking about how we train AI to be doctors but how we train doctors to effectively use AI.

Tom: How can AI applications help with increasing efficiency, quality, and experience for nonclinical staff? 

JON FREEDMAN: 

The service center, in particular, has several near-term use cases. As Julie noted on the clinical side, opportunities exist for training nonclinical staff at every point of their professional life cycle, starting with onboarding.

For instance, health systems can glean learnings from successful, experienced agents in the service center and use them to rapidly train less experienced agents in best practices, reducing the need for intensive 1:1 training.

Health systems can also use AI applications to generate high-quality call summaries from live voice interactions. This can create unique structured data to include in the enterprise customer relationship management (CRM) platform. Similar to how ambient listening in the exam room can ease the data entry burden for clinicians, better understanding service center interactions makes it easier for agents to be efficient and accurate. It also enables system leaders to pursue the highest-value improvement opportunities.

In addition, real-time work aids can improve the patient experience and reduce unnecessary involvement of other staff. Proactively generating real-time prompts for the agent—based on what the health system knows about the patient (e.g., their history with the health system, what they are sharing on the call)—can guide the agent to ask the right questions and take the next best action. Even cutting just a few seconds off each phone call can add up to a lot of savings and increased satisfaction for the staff and patients.
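As a simple illustration of such a real-time work aid, the Python sketch below maps a caller’s stated reason and a few assumed patient attributes to suggested prompts for the agent. The fields and rules are invented placeholders, not an actual CRM or EHR data model.

from dataclasses import dataclass, field

# Hypothetical patient context; field names are placeholders for illustration.
@dataclass
class PatientContext:
    overdue_screenings: list = field(default_factory=list)
    recent_er_visit: bool = False
    has_portal_account: bool = True

def agent_prompts(call_reason: str, ctx: PatientContext) -> list:
    """Return suggested talking points for the agent during a live call."""
    prompts = []
    if call_reason == "schedule_appointment" and ctx.overdue_screenings:
        prompts.append("Offer to book overdue screening(s): " + ", ".join(ctx.overdue_screenings))
    if ctx.recent_er_visit:
        prompts.append("Confirm a follow-up visit is scheduled after the recent ER visit")
    if not ctx.has_portal_account:
        prompts.append("Offer help enrolling in the patient portal")
    return prompts

print(agent_prompts("schedule_appointment",
                    PatientContext(overdue_screenings=["mammogram"], recent_er_visit=True)))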

BRET ANDERSON: 

Taking that a step further, health systems can sift through and aggregate why patients are calling in a much more robust way. Doing so can inform opportunities for service automation, improve patient and staff satisfaction, reduce call wait times, and decrease cost. Current automation is largely driven by hypotheses with limited data. But AI could sort through large volumes of data to help develop interaction pathways that help get the patient connected to the right resources more quickly.

Tom: These are great points and highlight how AI used for administrative purposes can also make a significant impact on the clinical side. 

So many consumer interactions live in the ether of “calls.” And a majority go to unstructured endpoints, like physicians’ offices. This means a huge share of the clinical workforce is doing manual administrative work that health system leaders have no insight into. It’s just a black box. There’s a lot of waste obscured by the lack of data on very high-volume activities.

AI can listen through voice interactions and leverage voice-to-text natural language processing (NLP), not as Big Brother, but to learn and categorize. Who answered the call? Were they the right person? Was the patient’s reason for calling addressed? These tools can help inform why patients are calling. They also help predict when patients are going to call, where they’ll call, and whether they’ll get the answers they’re seeking. The organization can then make sure calls are going to the right places and being served by the right resources.
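As a toy example of that categorization step, assuming the calls have already been transcribed, the Python sketch below tags each transcript with a call reason using simple keyword rules and aggregates the results. A production system would use a trained NLP model; the categories and phrases here are invented for illustration.

from collections import Counter

# Invented categories and keywords; a real system would learn these from data.
CALL_REASONS = {
    "prescription_refill": ["refill", "prescription", "pharmacy"],
    "scheduling": ["appointment", "reschedule", "cancel"],
    "billing": ["bill", "charge", "insurance", "copay"],
}

def categorize(transcript: str) -> str:
    text = transcript.lower()
    for reason, keywords in CALL_REASONS.items():
        if any(k in text for k in keywords):
            return reason
    return "other"

transcripts = [
    "Hi, I need to reschedule my appointment for next week.",
    "I'm calling about a charge on my bill from last month.",
    "Can you send a refill to my pharmacy?",
]
print(Counter(categorize(t) for t in transcripts))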

BRET: 

Another application of this data is better understanding patient demand. Even beyond predictable demands like flu spikes and holiday-related build-up in the Emergency Department, organizations can identify when and how patients will seek care, then proactively deploy tools and resources to serve that demand.

JON: 

In the not-too-distant future, organizations will identify patient demand trends in real time by combining live service center data with electronic health record (EHR) data, community or population trends, and other data channels (such as current consumer search information).
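One simple way to picture that blending of channels is a composite demand index, sketched below in Python. The signals, weights, baselines, and surge threshold are illustrative assumptions, not a validated forecasting model.

# Made-up signal values and weights, purely to illustrate the blending step.
signals = {
    "call_volume": {"today": 1450, "baseline": 1200, "weight": 0.40},
    "ed_visits": {"today": 310, "baseline": 280, "weight": 0.35},
    "flu_search_interest": {"today": 72, "baseline": 50, "weight": 0.25},
}

def demand_index(signals: dict) -> float:
    """Weighted average of each signal's ratio to its baseline (1.0 = typical demand)."""
    return sum(s["weight"] * s["today"] / s["baseline"] for s in signals.values())

index = demand_index(signals)
print(f"Composite demand index: {index:.2f}")
if index > 1.15:  # assumed surge threshold
    print("Projected surge: consider flexing staff or opening more virtual visit slots")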

Tom: What are some of the potential people-related risks, and how can leaders mitigate them?

JULIE: 

Trust is a huge factor for all stakeholders. The data and outputs have to be reliable and trusted. For that to happen, the organization needs to not only deploy the tool itself but also define the workflows around it: decide what you want people to do with the tool’s output and build the systems that enable their efforts. For instance, if clinicians are using diagnostic screening tools, they need to trust the findings and know what to do with the results, not just duplicate effort with a redundant exam or manual verification of the result.

JON: 

Similarly, if you’re making clinical and nonclinical staff more efficient, don’t just immediately backfill the created capacity with more (and likely more complicated) work. For instance, there’s a temptation to pile on more visits for clinicians, rather than enhancing the quality of visits. 

Be careful to strike the right balance: pursue higher productivity without creating unsustainable workloads. Similarly, make sure you’re not making one area more efficient only to shift the bottleneck to other people downstream.

BRET: 

The key is treating AI as an engagement tool. It shouldn’t be something you do to your staff but something you do with your team. The same digital solution can be applied in different ways, in different organizations, with very different outcomes.

Tom: Let’s talk about risks and mitigations leaders should consider on the more operational side.

Another big risk is not considering the operational model changes that need to be made alongside the tools. Tools won’t achieve anything on their own. You need to put them in people’s hands in a way that lets people use and collaborate with them.

JON: 

Privacy and security need to be top of mind. For instance, how do you implement a passive listening bot that’s recording and aggregating sensitive patient information in a secure and compliant way? One important element will be transparency. In addition to legal considerations, you’ll have to address consumer, patient, and staff comfort level and engagement. And you’ll need to ensure appropriate safeguards are in place for how (and even if) the underlying data is used.

The healthcare industry should also seek to learn from other industries and work with them on solutions, rather than working on all of this on its own. For instance, the education industry could inform how health systems leverage AI for workforce training that is efficient, effective, and professionally satisfying. While healthcare is unique, there are worthwhile learnings we can build on from other industries.

Tom: Thanks for your insights, everyone. The big takeaway is this: With careful planning and collaboration, AI tools can better engage and deploy your workforce. By focusing their time on higher-value activities, you can create capacity and reduce cost, while improving the experience for your workforce and patients alike.
 

© 2024 The Chartis Group, LLC. All rights reserved. This content draws on the research and experience of Chartis consultants and other sources. It is for general information purposes only and should not be used as a substitute for consultation with professional advisors. It does not constitute legal advice.
