Addressing Implicit Bias with Health Tech

Clinicians are calling for health equity to be added as a fifth aim for healthcare improvement. The current quadruple aim focuses on four areas: improving population health, reducing costs, improving the patient experience, and improving provider satisfaction. While the other aims may tangentially impact health equity, making health equity its own aim refocuses efforts and directs more resources and attention toward addressing it. And honestly, how can we truly improve population health and the patient experience if we are not willing to address the social determinants of health (SDOH) that impact the frequency and quality of care patients receive?

One barrier to health equity is unconscious bias. Implicit biases are automatic, unintentional associations about a person, idea, or thing that can shape thoughts, decisions, and actions. While clinicians may not recognize these biases, they can still affect the care they provide. One study looked at how pediatricians treat teenagers' pain after surgery and found that higher provider implicit bias favoring white patients led to a decreased likelihood of the provider prescribing appropriate pain medication for Black patients. In another study, 15% of LGBTQ+ Americans reported postponing or avoiding medical treatment due to discrimination. (Want to work on understanding and mitigating your implicit biases? Check out the links at the end of this article.)

Subconscious biases can manifest through the words clinicians use when talking about patients, both verbally and in documentation. A study reported in JAMA found that the majority of negative language in documentation fell into one or more of the following categories: questioning patient credibility, expressing disapproval of patient reasoning or self-care, stereotyping by race or social class, portraying the patient as difficult, or emphasizing physician authority over the patient.

This initial study prompts further questions. When a chart says that a patient “refused medication”, does the clinician automatically view the patient as difficult to work with? Do negative or stigmatizing words in a chart trigger bias in how clinicians interact with a patient? Can these biases influence the type and quality of care? Further research is needed to assess the impact that stigmatizing language has on patient outcomes. 

We believe health tech may offer solutions for addressing implicit bias. Machine learning tools can be built to detect bias in data before that data is used in other health-specific algorithms. And when models are trained on data from a diverse population, the resulting algorithms may be better at reducing unexplained disparities and at predicting which treatment options will work for which patients. Natural language processing (NLP) can also help clinicians identify SDOH information buried in unstructured chart notes.
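To make the NLP idea concrete, here is a minimal sketch of what surfacing SDOH mentions from free-text notes could look like. It is purely illustrative: the lexicon, category names, and `flag_sdoh` helper are invented for this example, and a production system would use trained clinical NLP models with negation and context handling rather than simple keyword matching.

```python
import re

# Toy keyword lexicon for a few SDOH categories. This is a hypothetical
# example, not a real clinical vocabulary; production pipelines use
# trained models that understand negation and context.
SDOH_LEXICON = {
    "housing": [r"homeless\w*", r"housing insecur\w*", r"evict\w*"],
    "food": [r"food insecur\w*", r"skip\w* meals", r"food bank"],
    "transportation": [r"no transportation", r"cannot drive"],
}

def flag_sdoh(note_text: str) -> dict:
    """Return SDOH categories found in a note, with the triggering phrases."""
    hits = {}
    for category, patterns in SDOH_LEXICON.items():
        for pattern in patterns:
            for match in re.finditer(pattern, note_text, flags=re.IGNORECASE):
                hits.setdefault(category, []).append(match.group(0))
    return hits

note = ("Patient reports she was recently evicted and has been "
        "skipping meals; no transportation to follow-up visits.")
print(flag_sdoh(note))
# {'housing': ['evicted'], 'food': ['skipping meals'],
#  'transportation': ['no transportation']}
```

Even a rough flag like this could prompt a care team to ask follow-up questions or connect a patient with community resources, which is the point of pulling SDOH data out of unstructured notes in the first place.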

Now, machine learning is not without its faults. One challenge for machine learning and health equity is ensuring the data fed to the models is unbiased and representative of the population. If 98% of the data used for machine learning is from men, how well can the model provide information and recommendations for women? The same question applies to any underrepresented group. While there is not yet a perfect way to eliminate all bias, there are strategies for reducing bias in machine learning, such as being transparent about the selected training datasets, using mathematical approaches to de-bias the data, and post-authorization monitoring.
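As one example of a mathematical de-biasing approach, reweighing gives each training record a weight inversely proportional to its group's share of the data, so an underrepresented group is not drowned out during training. The sketch below (the `balance_weights` helper is hypothetical, written for this post) shows the idea applied to the 98%-men dataset described above.

```python
from collections import Counter

def balance_weights(groups):
    """Weight each record inversely to its group's share of the data,
    so every group contributes equal total weight during training."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

# A skewed dataset: 98 records from men, 2 from women.
groups = ["M"] * 98 + ["F"] * 2
weights = balance_weights(groups)
print(round(weights[0], 2), weights[-1])   # 0.51 25.0
# Many ML libraries accept per-record weights,
# e.g. model.fit(X, y, sample_weight=weights)
```

Reweighing is no cure-all; it only corrects for the groups you thought to measure, which is why transparency about training data and ongoing monitoring matter just as much.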

Do you have any examples of how health tech is being used to improve health equity? Share them with us on Twitter (@CareAlignAI), Instagram (@carealign.ai), or LinkedIn (CareAlign).

Tips for reducing bias:
