Health organizations play a key role in offering access to care, motivating skilled workers and acting as social safety nets in their communities. They, along with life sciences organizations, serve on the front lines of addressing health equity.

With a decade of experience in data and knowledge solutions, specializing in document processing, artificial intelligence, and natural language processing, I strive to apply my technical and industry experience to the critical issue of diversity, equity and inclusion in healthcare.

Here are five questions I hear frequently in my work:

1. What is the digital divide and how does it affect healthcare users?

There are still too many people in this country who do not have reliable access to computing devices and the Internet in their homes. If we think back to the beginning of the pandemic, we can see this clearly. The number one barrier to moving to virtual school was that kids didn't have devices or reliable Internet at home.

We also saw clearly that this divide had a disproportionate impact on low-income people in underserved neighborhoods.

The problem is both accessibility (whether people can effectively use digital tools) and access (whether they have the devices and connectivity at all).

The result, through a healthcare lens, is that people without reliable Internet access have less access to information they can use to manage their health.

They are less able to find a doctor who is right for them. Their access to information about their insurance policy and what is covered is more limited. They have less access to telehealth services and less ability to meet a provider from home.

All of this matters because we use digital and Internet-connected tools to improve healthcare and patient outcomes. But at the end of the day, the digital divide means we achieve marginal benefits for populations that are already well served and no significant benefits for the populations that need support the most.

2. How can organizations maintain an ethical stance while using AI/ML in healthcare?

Focus on inherent biases, the subconscious stereotypes that influence how people make decisions. People carry inherent biases absorbed from their environment, and these require conscious recognition and attention. Machine learning models pick up these biases as well. This happens because the models are trained on data about historical human decisions, so human biases show up in the models (and can even be amplified). It is extremely important to understand where a model came from, how it was trained, and why it was created before using it.

Ethical use of AI/ML in healthcare requires careful attention to detail and often human review of machine solutions to build trust.

3. How can HCOs manage inherent data biases? Is it possible to eliminate them?

At this stage, we are working to manage bias, not eliminate it. This is most critical for training machine learning models and correctly interpreting the results. In general, we recommend using appropriate tools to help detect biases in model predictions and use these detections to drive retraining and reprediction.

Here are some of the simplest tools in our arsenal:

  • Flip the suspect parameter and re-run the prediction.
  • Determine whether the model would make a different prediction if the person were white and male.
  • Use that additional data point to inform a human reviewer's decision.
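The "flip and re-predict" check above can be sketched as a simple counterfactual test. This is an illustrative sketch only, not any specific vendor tool; the toy model and feature names are hypothetical.

```python
def counterfactual_flip_check(predict, record, attribute, alternative):
    """Re-run a prediction with one protected attribute changed.

    predict: any callable taking a feature dict and returning a label.
    Returns (original, counterfactual, changed) so a human reviewer can
    see whether the protected attribute alone changed the outcome.
    """
    original = predict(record)
    flipped = dict(record)            # copy so the input is not mutated
    flipped[attribute] = alternative
    counterfactual = predict(flipped)
    return original, counterfactual, original != counterfactual

# Hypothetical biased "model": approves only male applicants over an income bar.
def toy_model(r):
    return "approve" if r["sex"] == "male" and r["income"] > 40_000 else "deny"

result = counterfactual_flip_check(
    toy_model, {"sex": "female", "income": 55_000}, "sex", "male")
print(result)  # ('deny', 'approve', True) -> the flip alone changed the outcome
```

A `True` in the third position does not prove discrimination by itself; it flags the case for the human review discussed below.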

For healthcare in particular, the human in the loop is extremely important. There are cases where membership in a protected class changes a prognosis because it acts as a proxy for a key genetic factor (male or female, white or Black). A computer can straightforwardly correct for bias when reviewing a loan application. However, when assessing heart attack risk, there are real health factors that correlate with race or gender.

4. Why is it important to train data professionals in this field?

Data scientists should be aware of potential problems and omit protected class information from model training sets whenever possible. This is very difficult to do in healthcare because that information can be genuinely predictive of outcomes.
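One common pattern is to exclude protected attributes from the training features while retaining them for auditing predictions afterward. A minimal sketch, assuming a pandas DataFrame with hypothetical column names:

```python
import pandas as pd

# Hypothetical protected-class columns; the real list depends on
# jurisdiction, use case, and clinical relevance.
PROTECTED = {"race", "sex", "age"}

def split_features(df: pd.DataFrame):
    """Separate protected columns from the training features.

    The protected columns are kept (not discarded) so they can still be
    used to audit predictions, even though the model never sees them.
    """
    present = [c for c in df.columns if c in PROTECTED]
    return df.drop(columns=present), df[present]

df = pd.DataFrame({
    "age": [34, 61], "sex": ["F", "M"],
    "bp": [120, 140], "cholesterol": [180, 240],
})
X, audit = split_features(df)
print(list(X.columns))      # ['bp', 'cholesterol']
print(list(audit.columns))  # ['age', 'sex']
```

Note the caveat from the text: in healthcare, attributes like age may be legitimate predictors, so this split is a starting point for review, not a blanket rule.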

Data scientists need to understand how likely a problem is and be trained to recognize problematic patterns. It is therefore very important for them to have some understanding of the medical or scientific domain for which they are building a model.

They need to understand the context of the data they use and the predictions they make in order to determine whether results for protected classes are expected or unexpected.

5. What tools are available to identify biases in AI/ML models and how can an organization choose the right tool?

Tools like IBM Watson OpenScale, Amazon SageMaker Clarify, Google's What-If Tool, and Microsoft's Fairlearn are a great starting point for detecting biases in models during training, and some can do so at runtime (including the ability to make corrections or to identify changes in model behavior over time). These tools, which enable outlier detection as well as model explainability and observability, are critical to bringing AI/ML into live clinical and non-clinical healthcare settings.
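As a sketch of the kind of check these tools perform, here is one standard fairness metric, the demographic parity difference, computed by hand (Fairlearn exposes a metric under the same name). The predictions and groups below are made up for illustration.

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Largest gap in positive-prediction rate between groups.

    0.0 means every group receives positive predictions at the same
    rate; larger values indicate a bigger disparity.
    """
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rates = [y_pred[sensitive == g].mean() for g in np.unique(sensitive)]
    return max(rates) - min(rates)

# A model that flags 75% of group "a" but only 25% of group "b":
preds = [1, 1, 1, 0, 1, 0, 0, 0]
group = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, group))  # 0.5
```

A large value does not identify the cause of the gap; it flags which groups to investigate, which then drives the retraining loop described above.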

Healthcare leaders are turning to us

Perficient is dedicated to enabling organizations to elevate diversity, equity and inclusion within their companies. Our healthcare practice comprises experts who understand the unique challenges facing the industry. The 10 largest health systems and 10 largest health insurers in the U.S. have relied on us to support their end-to-end digital success. Modern Healthcare also recognized us as the fourth-largest healthcare technology consulting firm.

We bring pragmatic, strategically grounded know-how to our clients' initiatives. And our work is getting attention, not just from industry groups that recognize and award our work, but also from the top technology partners who know our teams will reliably deliver complex, game-changing implementations. Most importantly, our clients demonstrate their trust in us by partnering with us again and again. We are incredibly proud of our 90% repeat business rate because it represents the trust and collaborative culture we work so hard to build every day within our teams and with every client.

With more than 20 years of experience in the healthcare industry, Perficient is a trusted global end-to-end digital consultancy. Contact us to learn how we can help you plan and implement a successful DE&I initiative for your organization.


5 Commonly Asked Questions About Intrinsic Bias in AI/ML Models in Healthcare
