Disability and the Emergence of Digital Barriers to Health Screening

Sara M. Bergstresser, PhD, MPH, MS


Citation

Bergstresser S. Disability and the emergence of digital barriers to health screening. HPHR. 2021;44.

DOI:10.54111/0001/RR3


Abstract

For the disability community, there are many barriers to inclusion in public health initiatives, including screening programs and illness prevention. Physical barriers and social biases have long impeded access, but with the rapid creation of new digital health technologies, digital barriers have become an additional problem. This paper illustrates by example how new digital barriers to inclusion are emerging, and it outlines how existing forms of physical and social exclusion become encoded into digital health technologies. The conclusion explains why disability inclusion in digital health is necessary for social justice and equality in public health.

Health systems and institutions have historically been designed around a normative vision of an average person. For people with disabilities who differ from these assumed norms, the result is often exclusion and denial of needed health resources.1 These problems extend to screening and prevention initiatives within public health, which can present multiple types of barriers for the disability community. Inaccessible buildings and medical equipment can block physical access, and exclusion from health education can result when information is not presented in alternative formats such as large print or braille.2-3 Social bias and stigma can also lead to exclusion. For example, people with disabilities are less likely to be screened for cancer; this is particularly apparent in disparities in breast and cervical cancer screening, treatment, and survival for women with disabilities.4-5 One source of bias is the preconceived notion that disabled people are asexual, which can lessen provider attention to breast and cervical health.

 

Now, a new digital health world is being created, and it is once again being based on and made for people who fit statistical representations and social notions of average and normal. There is already ample evidence of bias in algorithmic and machine-learning applications for health screening and care; the biases identified so far relate primarily to race, ethnicity, economic class, sex, and gender.6-9 In the area of disability, historic barriers of physical and social exclusion have been slowly addressed over time, through physical accessibility mandates in the Americans with Disabilities Act (ADA), the presentation of public health information in multiple formats, and more education for health care providers. Nevertheless, disability bias continues, and multiple forms of bias often intersect.10 As the digital health world is built anew, the same persistent biases are making their way into data sampling methods and algorithmic design.11-12 Many past lessons and initiatives to increase access have been ignored or forgotten in the digital world, leaving public health researchers and health providers scrambling to address the excluded populations that their new digital tools are not being built to serve.

Machine-Learning Models and Digital Exclusion: An Example

This example focuses on the emerging area of digital health tools that use artificial intelligence (AI) and machine-learning algorithms to predict and screen for illness and risk, and it shows the specific process through which physical and social exclusion can translate directly into systematic exclusion in digital health. It is based on an initiative to model risk for Clostridium difficile infection (CDI) from electronic health records (EHR) using a “generalizable machine-learning approach” that produces a model for in-hospital screening.13 The model of interest was developed to identify patients at high risk for CDI so as to better target infection prevention strategies, with the aim of generalizability within institutions.13 The second aspect of this example relates to psychiatric disability: individuals diagnosed with and hospitalized for severe mental illness constitute a population that has historically faced extreme social and physical exclusion via stigma and institutionalization in isolated facilities.

 

The CDI risk prediction model was developed at two major academic health centers. At one of these, the University of Michigan Hospitals (UM), the study population was defined as adult inpatients admitted over a 6-year period (n=258,050 visits).13 From those data, discharges in fewer than three days (n=60,927), patients who tested positive for CDI within two days of admission (n=797), and those with recent prior or duplicate CDI tests (n=1,495) were excluded, leaving 194,831 visits. These exclusions concerned variables directly related to in-hospital CDI cases, but the following exclusion did not. UM also excluded patients admitted to the inpatient psychiatric unit, a further 3,817 excluded visits, or almost 2% of the 194,831-visit subtotal.
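To make the arithmetic of this cascade concrete, the sketch below reproduces the exclusion steps in pandas. It is a hypothetical reconstruction for illustration only: the column names (los_days, cdi_pos_within_2d, prior_or_dup_test, psych_unit) are invented, and the study's actual EHR schema and code are not public.

```python
import pandas as pd

def apply_exclusions(visits: pd.DataFrame) -> pd.DataFrame:
    """Reproduce the UM exclusion cascade described in the text (illustrative only)."""
    n_total = len(visits)                          # 258,050 adult inpatient visits
    visits = visits[visits["los_days"] >= 3]       # drop 60,927 stays under three days
    visits = visits[~visits["cdi_pos_within_2d"]]  # drop 797 CDI-positive within 2 days
    visits = visits[~visits["prior_or_dup_test"]]  # drop 1,495 prior/duplicate CDI tests
    n_subtotal = len(visits)                       # 194,831 visits remain
    visits = visits[~visits["psych_unit"]]         # drop 3,817 psychiatric-unit visits,
                                                   # ~2% of the subtotal; not CDI-related
    print(f"{n_total} total -> {n_subtotal} after CDI-related exclusions "
          f"-> {len(visits)} in the final sample")
    return visits
```

Note that the final filter is the only step with no CDI-related rationale; removing that single line would restore the excluded population to the training sample.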

 

This nonrandom exclusion is problematic for multiple reasons. First, systematically excluding one population produces a biased data sample; because this sample was the basis of subsequent UM model development, the model itself is biased in the same way. In addition, the existence of social and physical exclusion was taken as sufficient reason to further exclude this population digitally, showing a direct path by which historical forms of exclusion can be encoded into the new digital health world. The study authors state: “This decision was based on the fact that psychiatric inpatients at UM are located in a secure region of the hospital isolated from other patients and caregivers.”13 Though the methods section acknowledged this nonrandom exclusion of a vulnerable population, and the discussion listed many limitations, the authors did not acknowledge that excluding admissions to the psychiatric inpatient unit limits future generalizability to all adult inpatient admissions.

 

Though psychiatric admissions may at first seem largely irrelevant to a CDI-focused study, psychiatric symptoms and increased risk for CDI are in fact associated under some circumstances. For example, individuals with inflammatory bowel disease (IBD) are at higher risk for CDI, and psychiatric disorders are frequent comorbidities of IBD.14 Many other upstream determinants are possible, but because psychiatric populations are often excluded from research on physical diseases, these potential pathways remain under-researched.

Conclusion

Parikh et al.8 describe two types of bias in artificial intelligence and health care: social bias and statistical bias. Social bias refers to inequity in care that systematically leads to suboptimal outcomes for a particular group, and it can be caused by human factors such as implicit or explicit bias. Statistical bias, on the other hand, results from factors including suboptimal sampling, measurement error, and heterogeneity of effects. The example discussed above, in which psychiatric inpatient visits were excluded from an infection-control risk screening model, shows evidence of both types of bias. Social bias led the UM model developers to dismiss this population as expendable, rationalizing the exclusion based on the inconvenience of accessing a physically excluded population located in a locked hospital ward. This nonrandom exclusion led to a biased sample. Statistical bias was then introduced, or perhaps exacerbated, because the machine-learning algorithm and final screening model were built on this biased sample.
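The mechanism can be demonstrated with a minimal, hypothetical simulation, not drawn from the cited study: a risk model is trained on data that excludes a small subgroup carrying extra risk, and its predictions for that subgroup come out systematically too low. All parameters here (subgroup size, effect sizes) are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
x = rng.normal(size=n)               # a generic clinical risk feature
excluded = rng.random(n) < 0.02      # ~2% belong to the excluded subgroup
# Assume the excluded subgroup carries extra risk (e.g., via comorbidity):
logit = -3 + 1.0 * x + 1.5 * excluded
y = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train only on the non-excluded population, mirroring the nonrandom exclusion:
model = LogisticRegression().fit(x[~excluded].reshape(-1, 1), y[~excluded])
p = model.predict_proba(x.reshape(-1, 1))[:, 1]

print(f"excluded subgroup: observed {y[excluded].mean():.3f}, "
      f"predicted {p[excluded].mean():.3f}")    # predictions run low
print(f"everyone else:     observed {y[~excluded].mean():.3f}, "
      f"predicted {p[~excluded].mean():.3f}")   # predictions well calibrated
```

In this toy setup the model is well calibrated for the population it was trained on but under-predicts risk for the excluded subgroup, which is exactly the failure mode a screening tool built this way would carry into deployment.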

 

While AI- and machine-learning-based screening models hold great promise for rapid and accurate disease-specific screening, they are limited when produced through exclusion and biased sampling. In the making of the new world of digital health, the exclusion of disability and other marginalized populations must be remedied rapidly. If this is not prioritized, vast disparities in public health will persist, including the continued exclusion of a disproportionate number of disabled people from health screening initiatives. Mathematical solutions for algorithmic fairness are not enough, since they do not account well for complex causal relationships among biological, environmental, and social factors, for social determinants of health, or for the structural factors that affect health across multiple intersecting identities.15 The only path toward social justice and equality in public health is to include disabled and other marginalized populations from the start, and to sustain this inclusion throughout the entire process of developing new digital health technologies. Disability inclusion as an afterthought will not suffice.

Disclosure Statement

The author(s) have no relevant financial disclosures or conflicts of interest.

References

  1. Guidry‐Grimes L, Savin K, Stramondo JA, et al. Disability rights as a necessary framework for crisis standards of care and the future of health care. Hastings Center Report. 2020;50(3):28-32.
  2. Raynor DK, Yerassimou N. Medicines information - leaving blind people behind? BMJ: British Medical Journal. 1997;315(7103):268.
  3. Iezzoni LI, McCarthy EP, Davis RB, Siebens H. Mobility impairments and use of screening and preventive services. Am J Public Health. 2000;90(6):955-961.
  4. Courtney-Long E, Armour B, Frammartino B, Miller J. Factors Associated with Self-Reported Mammography Use for Women With and Women Without a Disability. Journal of Women’s Health. 2011;20(9):1279-1286.
  5. Steele CB, Townsend JS, Courtney-Long EA, Young M. Prevalence of Cancer Screening Among Adults With Disabilities, United States, 2013. Prev Chronic Dis. 2017;14:E09.
  6. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447-453.
  7. Panch T, Mattie H, Atun R. Artificial intelligence and algorithmic bias: implications for health systems. J Glob Health. 2019;9(2):010318.
  8. Parikh RB, Teeple S, Navathe AS. Addressing Bias in Artificial Intelligence in Health Care. JAMA. 2019;322(24):2377-2378.
  9. Cirillo D, Catuara-Solarz S, Morey C, et al. Sex and gender differences and biases in artificial intelligence for biomedicine and healthcare. npj Digital Medicine. 2020;3(1):81.
  10. Dopson R. Crossroads: Conversations about Race, Gender & Disability. HPHR Blogs. https://hphr.org/dopson-blog-1/. Accessed November 15, 2021.
  11. Trewin S, Basson S, Muller M, et al. Considerations for AI fairness for people with disabilities. AI Matters. 2019;5(3):40-63.
  12. Whittaker M, Alper M, Bennett CL, et al. Disability, bias, and AI. AI Now Institute; 2019.
  13. Oh J, Makar M, Fusco C, et al. A Generalizable, Data-Driven Approach to Predict Daily Risk of Clostridium difficile Infection at Two Large Academic Health Centers. Infect Control Hosp Epidemiol. 2018;39(4):425-433.
  14. Gracie DJ, Williams CJM, Sood R, et al. Poor Correlation Between Clinical Disease Activity and Mucosal Inflammation, and the Role of Psychological Comorbidity, in Inflammatory Bowel Disease. The American Journal of Gastroenterology. 2016;111(4):541-551.
  15. McCradden MD, Joshi S, Mazwi M, Anderson JA. Ethical limitations of algorithmic fairness solutions in health care machine learning. The Lancet Digital Health. 2020;2(5):e221-e223.

About the Authors

Sara M. Bergstresser, PhD, MPH, MS

Dr. Sara M. Bergstresser is currently a Lecturer in the Master’s in Bioethics program at Columbia University. She earned a PhD in Anthropology from Brown University, an MPH from Harvard School of Public Health, and an MS in Bioethics from Columbia University. Her research addresses the intersection of health and society, including global bioethics, mental health policy, stigma, disability studies, social justice, and structural inequalities.