Artificial intelligence (AI) has begun to transition from hype to substance over the last year. AI can rapidly generate insights from an increasing breadth and depth of data sources. But there are also associated risks, including the potential to exacerbate inequities experienced by communities that are under-resourced or have been historically marginalized.

The challenge now facing US health systems is how to best harness AI’s transformative potential without creating unintended, adverse consequences for their patient populations. 

Establishing what’s at stake: Three crucial questions

AI relies on data and advanced mathematical models to answer a question or complete a task. That means there is inherent risk that the data or model may be biased or unrepresentative of the population it addresses. Because AI and the drivers of inequities often operate in ways unseen, the intersection of the two is an especially dangerous space if not sufficiently scrutinized. Health system leaders can begin by focusing on these three questions: 

  1. Does the data we are using accurately represent the community we intend to serve? Health systems need to plan how they will avoid using AI built on biased data. Studies have highlighted how bias (often present in medical and demographic data and the algorithms and models that use this data to inform diagnoses and treatments) frequently results in inequitable and unfavorable outcomes for historically under-represented populations.1 One recent study found that a commonly used algorithm in the US was reducing the number of Black patients identified as needing clinically necessary extra care by more than half.2   

    A comprehensive and continuous approach to reviewing the underlying AI datasets for potential bias that would perpetuate inequities is a table-stakes capability that all health systems need to adopt as part of their AI strategy (a minimal sketch of one such representation check follows this list). Moreover, the recent executive order from President Biden explicitly cited health inequities as a risk that regulatory agencies will increasingly scrutinize in healthcare AI applications.3   
     
  2. Are we ensuring our AI applications will benefit all patient cohorts within our community without negatively impacting specific populations? When health systems develop their AI use cases, they need to consider the solutions in the context of all the populations served within the health system.   

    The focus should be to ensure that AI initiatives do not widen the divide in service and outcomes between “attractive” patient populations (e.g., patients with higher-reimbursing health plans, which are generally commercial) and those often deemed “less desirable” (e.g., lower-reimbursing populations, such as Medicare and Medicaid beneficiaries and self-pay patients).   

    When health systems focus only on the biggest opportunities for AI applications, populations that aren’t considered a priority can easily be overlooked in how models are designed and how results are assessed for target populations. Without considering all populations, the most likely result is worse outcomes for patients in “less attractive” populations.4 
     
  3. Will our AI pursuits impact the make-up of our workforce? As AI reshapes or automates roles, the focus should be on how the health system upskills existing employees and creates entry points and training for people from the local community to hold positions within the organization.   

    For roles that will not be eliminated, it’s essential to proactively communicate with employees about the intent and positioning of AI as an augmentation of their jobs, rather than a replacement.
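
To make the dataset review in question 1 concrete, the sketch below illustrates one basic representation check: comparing each demographic group’s share of a model’s training data with its share of the community the health system serves. The column name, benchmark figures, and tolerance threshold are illustrative assumptions rather than a prescribed standard; a real review would draw benchmarks from census or community health needs assessment data and cover additional dimensions such as age, payer, language, and geography.

```python
# Minimal sketch of a representation check (illustrative assumptions:
# a "race_ethnicity" column and a benchmark of community demographics).
import pandas as pd

def flag_underrepresented_groups(records: pd.DataFrame,
                                 community_benchmark: dict,
                                 tolerance: float = 0.05) -> pd.DataFrame:
    """Compare each group's share of the training data with its share of
    the served community; flag gaps larger than `tolerance`."""
    observed = records["race_ethnicity"].value_counts(normalize=True)
    rows = []
    for group, expected_share in community_benchmark.items():
        observed_share = float(observed.get(group, 0.0))
        rows.append({
            "group": group,
            "share_in_training_data": round(observed_share, 3),
            "share_in_community": expected_share,
            "underrepresented": observed_share < expected_share - tolerance,
        })
    return pd.DataFrame(rows)

# Illustrative use with made-up numbers; a real benchmark would come from
# census or community health needs assessment data.
records = pd.DataFrame(
    {"race_ethnicity": ["White"] * 70 + ["Black"] * 10 + ["Hispanic"] * 20}
)
benchmark = {"White": 0.55, "Black": 0.25, "Hispanic": 0.20}
print(flag_underrepresented_groups(records, benchmark))
```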
     

Four steps to answering these AI questions

Potential adverse impacts of AI on under-resourced communities are avoidable. Health systems can work to not only minimize these risks as they target select populations but also use their AI tools to proactively advance efforts that can improve health equity. These steps are essential: 

  1. Establish enterprise AI governance with health equity leaders at the table. Discrete enterprise AI governance is a best practice for any health system focused on materially deploying AI.   

    Governing bodies should include internal and external leaders who specifically work with under-resourced communities so they can help establish additional guardrails (such as data quality, diversity, and representation checklists) and ensure AI tools are fueled by equitably representative data and unbiased algorithms that seek to identify and ameliorate—rather than perpetuate—health disparities.5  
     
  2. Develop enterprise AI use guidelines, keeping the impacts on under-resourced populations top of mind. Leveraging AI appropriately can become a major strategic differentiator for health systems that get it right. But it can have substantive consequences for organizations that get it wrong.   

    A critical requirement to realize the benefits—while minimizing the risk of unintended consequences—is having sufficient AI use guidelines in place. As the health system defines these guidelines, it’s important to consider not just the general rules the organization will follow (e.g., no direct clinical information is provided to patients without a human intermediary). Health systems should also consider the rules they will apply to systematically evaluate and monitor the impacts on under-resourced populations.
     
  3. Concurrently evaluate the upside opportunities and downside risks with each AI use case. Systematic and thoughtful assessment and validation of use cases are perhaps the most consequential steps a health system should take to identify efforts with the greatest strategic value. The health system also should explicitly understand and consistently quantify the possible risks of adopting them. Pursuing these activities through a health equity lens will require several key considerations: 
    • Prioritize opportunities for using AI tools to improve care and outcomes where disparities currently exist.
    • Carefully assess AI outputs to determine whether and how discrete populations are represented and impacted (a minimal sketch of such a subgroup check follows this list).
    • Hedge against AI data set limitations and bias by casting a wider analytics net beyond your health system’s own data, including community health-related information.
       
  4. Deploy a proactive change management approach to fostering trust and driving adoption. While an abundance of excitement surrounds recent AI breakthroughs, that enthusiasm is often matched by anxieties about AI’s potential impact on both patients and the health system workforce.   

    The prospect of technology replacing human workers is a real and understandable fear for many, particularly employees in administrative and operational support roles who are commonly the focus of current AI applications. These positions also tend to be disproportionately filled by employees of color.6 Therefore, it’s important for health system leaders to acknowledge and proactively address those concerns. 

    Among the many steps leadership should take to assuage these anxieties:  

    • Explicitly assess near- and long-term AI use case effects on employment and develop active mitigation tactics for displaced workers, such as job placement and training.  
    • Actively communicate with staff and the community about how AI is being deployed and the safeguards against unintended consequences.  
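
As a concrete illustration of the subgroup assessment called out in step 3, the sketch below computes one fairness-relevant metric, the rate at which a model misses patients who truly need extra care, broken out by patient cohort. The column names and data are hypothetical; in practice, a health system would track several such metrics per use case and per population, both before deployment and on an ongoing basis.

```python
# Minimal sketch of a subgroup outcome check (hypothetical columns:
# "group", "needs_extra_care" as ground truth, "flagged_by_model").
import pandas as pd

def false_negative_rate_by_group(results: pd.DataFrame) -> pd.Series:
    """Among patients who truly needed extra care, what share did the
    model miss, broken out by cohort? Large gaps between cohorts are a
    signal to pause and investigate before (or during) deployment."""
    needed = results[results["needs_extra_care"]]
    missed = ~needed["flagged_by_model"]
    return missed.groupby(needed["group"]).mean().rename("false_negative_rate")

# Illustrative use with made-up outcomes.
results = pd.DataFrame({
    "group":            ["A", "A", "A", "B", "B", "B"],
    "needs_extra_care": [True, True, False, True, True, True],
    "flagged_by_model": [True, True, False, True, False, False],
})
print(false_negative_rate_by_group(results))
```
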
Applying AI to workforce engagement and equity

Innovative approaches are emerging for applying AI to workforce engagement. In fact, nearly half of health system executives say that they are employing AI for workforce solutions. Organizations are increasingly using AI to personalize career pathways with customized training, foster team and affinity group connections, tailor benefit packages, and track individual employees’ risk of burnout.

In healthcare organizations, health inequities are often discussed in the context of patients. However, health inequities and social drivers of health also affect healthcare workers and can negatively impact their attendance, job performance, and general well-being, as well as contribute to burnout.  

A 2022 study published in the Journal of Primary Care & Community Health found that certain segments of health systems’ workforces were more likely to be negatively impacted by social drivers of health. For instance, the rate of food insecurity was 38% among clinical support workers (e.g., clinical technicians and medical assistants) and 30% for other support workers (e.g., food service and environmental service workers). In comparison, the rate for staff physicians and nurses was 5% and 1%, respectively.  

The rate of financial strain was 32% for clinical support workers, 33% for other support workers, 8% for nurses, and 0% for staff physicians. Other social drivers the study found significantly impact certain workforce segments include inability to pay for housing, domestic partner violence, social isolation, and lack of internet access.  

Hospitals and health systems typically rely on annual employee surveys to understand what workers are struggling with and identify areas of opportunity to better support employees. However, an AI-powered algorithm could be used to analyze the host of available employee data (and additional informative data points the organization could add) to identify needs in real time. For example, if a certain cadre of workers from a particular geography has a much higher absentee rate, it may be that they have poor transportation options. It might be worth providing a shuttle service to assist them and reduce absenteeism.  
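
A minimal sketch of that kind of signal appears below, assuming a hypothetical HR extract with employees’ home ZIP codes and absence days over a fixed period; the column names and threshold are illustrative, and any real analysis would require appropriate privacy safeguards and HR involvement.

```python
# Minimal sketch: flag home ZIP codes with unusually high average absence,
# as a prompt to look for shared barriers such as limited transportation.
import pandas as pd

def high_absence_zip_codes(hr_extract: pd.DataFrame,
                           threshold_ratio: float = 1.5) -> pd.DataFrame:
    """Return ZIP codes whose average absence days exceed the
    workforce-wide average by `threshold_ratio`."""
    by_zip = hr_extract.groupby("home_zip")["absence_days"].mean()
    overall = hr_extract["absence_days"].mean()
    flagged = by_zip[by_zip > overall * threshold_ratio]
    return flagged.to_frame(name="avg_absence_days").assign(workforce_avg=overall)

# Illustrative use with made-up data.
hr_extract = pd.DataFrame({
    "employee_id": range(6),
    "home_zip": ["60601", "60601", "60601", "60644", "60644", "60644"],
    "absence_days": [1, 2, 1, 6, 7, 5],
})
print(high_absence_zip_codes(hr_extract))
```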

Considering how to use AI and related technology to reduce burnout and more proactively meet the workforce’s specific needs is one way to build organizational muscle and momentum around applying AI. 

Channel AI’s transformative potential

AI has the potential to be a tremendous force for good in healthcare. AI tools can help identify and enable opportunities to improve health equity in ways previously unimaginable. But without diligent oversight and careful scrutiny, those tools can easily perpetuate, increase, and create inequities—turning an organization’s well-intended efforts into a negative impact on its community.  

By employing these steps as part of their approach to AI deployment, health system leaders can ensure their organizations establish the necessary guardrails, collaborations, and strategies for AI use. The result is not only potential benefit to the health system’s future performance but also assurance that AI’s transformative potential advances the organization’s mission toward a healthier future for all the communities it serves.   


Sources

1 Ryan Levi and Dan Gorenstein, “AI in Medicine Needs to Be Carefully Deployed to Counter Bias—and Not Entrench It,” NPR, June 6, 2023, https://www.npr.org/sections/health-shots/2023/06/06/1180314219/artificial-intelligence-racial-bias-health-care; Sara Khor et al., “Racial and Ethnic Bias in Risk Prediction Models for Colorectal Cancer Recurrence When Race and Ethnicity Are Omitted as Predictors,” JAMA Network Open, June 15, 2023, https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2806099.  

2 Ziad Obermeyer et al., “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations,” Science, October 2019, https://www.science.org/doi/10.1126/science.aax2342.  

3 “President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence,” The White House, October 30, 2023, https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

4 For instance, when only images of white patients are used to train AI algorithms to spot melanoma, the result can be worse outcomes for people of color. “AI Could Worsen Health Inequities for UK’s Minority Ethnic Groups—New Report,” Imperial College London, https://www.imperial.ac.uk/news/230413/ai-could-worsen-health-inequities-uks/.

5 Carl Thomas Berdahl et al., “Strategies to Improve the Impact of Artificial Intelligence on Health Equity: Scoping Review,” JMIR AI, July 2, 2023, https://ai.jmir.org/2023/1/e42936.

6 Jason Semprini, “Examining Racial Disparities in Unemployment Among Health Care Workers Before, During, and After the COVID-19 Pandemic,” Journal of Patient-Centered Research and Reviews, July 18, 2023, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10358973/.

 

© 2023 The Chartis Group, LLC. All rights reserved. This content draws on the research and experience of Chartis consultants and other sources. It is for general information purposes only and should not be used as a substitute for consultation with professional advisors. It does not constitute legal advice.

 

 
