Life in the Margins

By Dr. Heather Tillewein

Artificial Intelligence and Gender Identity

Artificial Intelligence

Artificial Intelligence (AI) is defined as the use of technology, including computers, to simulate intelligent behavior and critical thinking comparable to that of human beings.1 AI systems rely on algorithms and are trained on labeled data; this type of system underlies face recognition software.4 Organizations and corporations may use automated facial analysis, which includes face recognition, face detection, and face classification.4 AI has also been used to decide whom to hire or fire, who receives a loan, and how long a person spends in prison. This new, rapidly emerging field was developed to help close the equity gap and advance society, but is it actually having the opposite effect?

Gender and Racial Biases

Using labeled data (such as word embeddings) to train AI systems can directly and indirectly build gender biases into AI applications.3 Word embedding uses text data to teach machines and underpins natural language processing tasks.3 Through word embedding, AI can reproduce gender stereotypes and gender biases. The word associations these systems learn often tie gender-neutral words to gendered definitions (e.g., receptionist with female, or homemaker with female).3
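
As a rough illustration of how such associations surface, the sketch below (in Python, with small made-up vectors standing in for real embeddings) projects occupation words onto a "gender direction" computed from he/she, the kind of analysis used in the debiasing work cited above.3 The vectors and numbers are hypothetical; an actual analysis would use embeddings trained on large text corpora, such as word2vec or GloVe.

    import numpy as np

    # Hypothetical 4-dimensional embeddings; real systems use hundreds of
    # dimensions learned from large text corpora.
    embeddings = {
        "he":           np.array([ 0.9, 0.1, 0.3, 0.0]),
        "she":          np.array([-0.9, 0.1, 0.3, 0.0]),
        "receptionist": np.array([-0.5, 0.6, 0.2, 0.1]),
        "homemaker":    np.array([-0.7, 0.4, 0.1, 0.2]),
        "programmer":   np.array([ 0.6, 0.5, 0.3, 0.1]),
    }

    def unit(v):
        return v / np.linalg.norm(v)

    # A simple "gender direction": the difference between the he and she vectors.
    gender_direction = unit(embeddings["he"] - embeddings["she"])

    # Projecting a supposedly gender-neutral occupation word onto this direction
    # shows which gendered word it sits closer to in the embedding space.
    for word in ("receptionist", "homemaker", "programmer"):
        score = float(np.dot(unit(embeddings[word]), gender_direction))
        leaning = "male-associated" if score > 0 else "female-associated"
        print(f"{word:>13}: projection {score:+.2f} ({leaning})")

With these toy vectors, "receptionist" and "homemaker" project toward the female side and "programmer" toward the male side, mirroring the associations the cited study found in embeddings learned from real text.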


Beyond gender biases, racial biases have also emerged in Artificial Intelligence technology. AI benchmarks are often defined by binary labels: gender classes such as male or female, and skin type such as darker or lighter. Relying on phenotype as a stand-in for race, ethnicity, and individual features is problematic, because individual features vary within racial and ethnic categories, and those categories are not consistent across geographic locations.4
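
The disparities documented in the commercial gender classification audit cited above4 come from disaggregating a classifier's error rate by intersectional subgroup rather than reporting a single overall accuracy figure. A minimal sketch of that bookkeeping, using invented predictions and labels rather than any real benchmark results, might look like this:

    from collections import defaultdict

    # Invented evaluation records: (true_gender, skin_type, predicted_gender).
    # A real audit would draw these from a benchmark such as the Pilot
    # Parliaments Benchmark shown in Figure 1.
    records = [
        ("female", "darker",  "male"),
        ("female", "darker",  "female"),
        ("female", "lighter", "female"),
        ("male",   "darker",  "male"),
        ("male",   "lighter", "male"),
        ("female", "darker",  "male"),
    ]

    totals = defaultdict(int)
    errors = defaultdict(int)

    for true_gender, skin_type, predicted in records:
        group = (true_gender, skin_type)
        totals[group] += 1
        if predicted != true_gender:
            errors[group] += 1

    # Reporting error rates per intersectional subgroup makes gaps visible
    # that a single aggregate accuracy number would hide.
    for group in sorted(totals):
        rate = errors[group] / totals[group]
        print(f"{group[0]:>6} / {group[1]:<7}: error rate {rate:.0%} "
              f"({errors[group]}/{totals[group]})")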


Around 117 million Americans are included in law enforcement's face recognition networks. Garvie and colleagues determined that African Americans are more likely to be stopped by law enforcement and subjected to face recognition searches.6 Furthermore, facial recognition systems have misidentified women, young people ages 18-30, and people of color at high rates.7

Figure 1: Pilot Parliaments Benchmark (PPB) Dataset4

Implications

Given these misidentification problems for women, young people, and people of color, we must ask how AI will affect those who are non-binary, transgender, or members of another gender minority group. People who are gender nonconforming or transgender have higher rates of mental health problems linked to gender minority stress, driven by stigma, discrimination, and the social stressor of being misgendered.2,5,8 As public health professionals, in order to protect the well-being of gender minority groups, we need to ask:

  • What is the prevalence of AI misidentifying gender minority groups and would this cause more mental health harm to these individuals?
  • How can those developing AI be more inclusive to gender minority groups?
  • What is the impact of AI misgendering of gender minority groups?
  • If gender minority groups cannot participate in AI, what impact would this have on their overall health?
  • Would gender minority groups fall behind because they do not have the same access to health technology as the general population?


Artificial Intelligence was thought to be the tool that would close the gap in health equity and in various other aspects of daily living. Instead, AI has been shown to create a racial and gender gap within the general population.


The call is for future research on how AI might impact gender minority populations, and for the development of bias-free AI.

References

  1. Amisha, Malik, P., Pathania, M., & Rathaur, V. (2019, July 31). Overview of artificial intelligence in medicine. Journal of Family Medicine and Primary Care, 8(7), 2328-2331. https://dx.doi.org/10.4103%2Fjfmpc.jfmpc_440_19
  2. Bockting, W. O., Miner, M. H., Swinburne Romine, R. E., Hamilton, A., & Coleman, E. (2013, May). Stigma, mental health, and resilience in an online sample of the US transgender population. American Journal of Public Health, 103(5), 943-951. https://dx.doi.org/10.2105%2FAJPH.2013.301241
  3. Bolukbasi, T., Chang, K., Zou, J., Saligrama, V., & Kalai, A. (2016, December 5). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. NIPS’16: Proceedings of the 30th International Conference on Neural Information Processing Systems, Barcelona, Spain.
  4. Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81(1), 1-15. http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf
  5. Cooper, K., Russell, A., Mandy, W., & Butler, C. (2020, August). The phenomenology of gender dysphoria in adults: A systematic review and meta-synthesis. Clinical Psychology Review, 80(1), 1-35. https://dx.doi.org/10.1016%2Fj.cpr.2020.101875
  6. Garvie, C., Bedoya, A., & Frankle, J. (2016, October 18). The perpetual line-up: Unregulated police face recognition in America. Georgetown Law Center on Privacy and Technology. https://www.perpetuallineup.org/
  7. Klare, B., Burge, M., Klontz, J. C., Vorder Bruegge, R. W., & Jain, A. K. (2012, December). Face recognition performance: Role of demographic information. IEEE Transactions on Information Forensics and Security, 7(6), 1789-1801. http://dx.doi.org/10.1109/TIFS.2012.2214212
  8. Meyer, I. H. (2015). Resilience in the study of minority stress and health of sexual and gender minorities. Psychology of Sexual Orientation and Gender Diversity, 2(3), 209-213. https://psycnet.apa.org/doi/10.1037/sgd0000132