AI is the backbone of many applications that we use daily, such as search engines, decision support systems, recommender systems, and chatbots. The underlying AI algorithms employed in these applications are often trained on data generated by humans, such as click logs or written articles. Since users are, by nature, prone to confirmation bias and stereotyping, data generated by humans reflects these issues.

The AI algorithms trained on this data adopt these prejudices, which may lead to unfair outcomes and direct disadvantages for underrepresented user groups. One example is a recommender system that supports employers in finding candidates for a given job. A biased AI algorithm could produce candidate recommendations that are heavily skewed toward a historically privileged user group, such as young white males for an IT job. This, of course, leads to unfair treatment of other candidates, even though they might be an equally good or even better fit for the job. In a recent publication, “Modelling the Long-Term Fairness Dynamics of Data-Driven Targeted Help on Job Seekers”, published in “Nature Scientific Reports” [1], we investigated fairness dynamics in the job domain. We found that AI-driven decision support can lead to fairness issues in the long term and that the use of AI therefore needs to be carefully monitored in such settings.
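To make the notion of an “unfair outcome” more concrete, the sketch below compares how often candidates from two groups are recommended relative to their share of the qualified pool, a simple demographic-parity style check. The data, group labels, and the reported numbers are purely illustrative and are not taken from [1].

```python
# Illustrative sketch: per-group selection rates of a (possibly biased) recommender.
from collections import Counter

def selection_rates(candidates, recommended):
    """Share of each group that was recommended, rounded for readability."""
    pool = Counter(c["group"] for c in candidates)
    picked = Counter(c["group"] for c in candidates if c["id"] in recommended)
    return {g: round(picked.get(g, 0) / n, 2) for g, n in pool.items()}

# Hypothetical candidate pool with two equally sized groups.
candidates = [
    {"id": 1, "group": "A"}, {"id": 2, "group": "A"}, {"id": 3, "group": "A"},
    {"id": 4, "group": "B"}, {"id": 5, "group": "B"}, {"id": 6, "group": "B"},
]
recommended = {1, 2, 3, 4}  # candidates surfaced to the employer

rates = selection_rates(candidates, recommended)
print(rates)                                    # {'A': 1.0, 'B': 0.33}: group A is strongly favoured
print(min(rates.values()) / max(rates.values()))  # parity ratio; the "four-fifths rule" flags values below 0.8
```

A check like this only captures one narrow notion of fairness; the point of [1] is precisely that such static snapshots can miss how disparities evolve over time.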

Other examples can be found in AI-driven search systems, where AI models are used to learn an accurate ranking of documents that fulfill a given search query. Again, a biased AI algorithm could produce rankings that discriminate against historically disadvantaged user groups or content providers. In a recent work, “Show me a ‘Male Nurse’! How Gender Bias is Reflected in the Query Formulation of Search Engine Users”, which will be presented at the “ACM CHI Conference on Human Factors in Computing Systems” [2], we studied whether search engine users themselves reflect these biases when formulating search queries. We found clear evidence that this is the case, especially with respect to gender bias and the explicit mention of a gender that does not conform to stereotypes (e.g., a “male nurse”).
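Ranking bias of this kind is often quantified via the exposure each group receives across result positions, since top positions attract far more attention than lower ones. The sketch below is purely illustrative (it is not the methodology of [2]): it uses a standard logarithmic position-bias weighting and a hypothetical result list.

```python
# Illustrative sketch: how exposure in a ranked result list is split across groups.
import math
from collections import defaultdict

def group_exposure(ranking):
    """ranking: list of group labels, ordered from top result to bottom."""
    exposure = defaultdict(float)
    for position, group in enumerate(ranking, start=1):
        # Standard position-bias weight: rank i contributes 1 / log2(i + 1).
        exposure[group] += 1.0 / math.log2(position + 1)
    total = sum(exposure.values())
    return {g: round(e / total, 2) for g, e in exposure.items()}

# Hypothetical ranking for a query such as "nurse": one gender dominates the
# top positions and therefore captures most of the users' attention.
ranking = ["female", "female", "female", "male", "female", "male"]
print(group_exposure(ranking))  # the "female" group receives the large majority of exposure
```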

[1] Scher, S., Kopeinik, S., Truegler, A., & Kowald, D. (2023). Modelling the Long-Term Fairness Dynamics of Data-Driven Targeted Help on Job Seekers. Nature Scientific Reports.
[2] Kopeinik, S., Mara, M., Ratz, L., Krieg, K., Schedl, M., & Rekabsaz, N. (2023). Show me a “Male Nurse”! How Gender Bias is Reflected in the Query Formulation of Search Engine Users. In Proceedings of the 2023 ACM CHI Conference on Human Factors in Computing Systems.