The AI-based language model GPT-3 (“Generative Pre-trained Transformer 3”) is currently receiving a lot of attention via the ChatGPT platform. ChatGPT generates human-like text in response to simple queries. However, one drawback of GPT-3 is that it is trained on human-generated data available on the Web and is therefore prone to biases.
AI-based systems are, by nature, prone to biases, which can lead to unfair treatment of specific user groups. To address this, we conduct basic and applied research in two main areas: (i) defining metrics and methods to measure and detect biases, and (ii) developing bias-mitigation methods and fairness-aware AI algorithms. For the former, we plan to work towards the quality assurance and certification of AI within our Trust-your-AI initiative (https://trustyour.ai/). For the latter, we plan to design and develop pre-, in-, and post-processing methods to achieve fairness-aware AI algorithms.
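As a minimal sketch of what such a bias metric can look like, the following computes the statistical parity difference: the gap in positive-outcome rates between a protected group and everyone else, where 0.0 indicates demographic parity. The function name and the toy data are illustrative, not part of our published tooling.

```python
from typing import Sequence

def statistical_parity_difference(y_pred: Sequence[int],
                                  group: Sequence[str],
                                  protected: str) -> float:
    """Difference in positive-outcome rates between the protected
    group and all other groups; 0.0 means demographic parity."""
    pos_prot = [y for y, g in zip(y_pred, group) if g == protected]
    pos_rest = [y for y, g in zip(y_pred, group) if g != protected]
    rate_prot = sum(pos_prot) / len(pos_prot)
    rate_rest = sum(pos_rest) / len(pos_rest)
    return rate_prot - rate_rest

# Toy example: 4 predictions for protected group "A", 4 for group "B"
preds  = [1, 0, 0, 0, 1, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(statistical_parity_difference(preds, groups, protected="A"))  # -0.5
```

A value far from zero in either direction signals that the system favors one group, which is exactly the kind of evidence a quality-assurance or certification process needs to surface.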
Pre-processing methods operate on the input data of AI algorithms to achieve fair data representations, while in-processing methods adapt the inner workings of the AI algorithms themselves, typically during training, to achieve fairness.
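A classic pre-processing idea is reweighing in the style of Kamiran and Calders: each training instance gets a weight that makes group membership statistically independent of the label, so a downstream learner sees a fairer data representation. The sketch below is a simplified illustration under that assumption, not our production code.

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Per-instance weights w(g, y) = P(g) * P(y) / P(g, y).
    Groups that are over-represented in a label get down-weighted,
    under-represented combinations get up-weighted."""
    n = len(labels)
    count_g = Counter(groups)                # marginal group counts
    count_y = Counter(labels)                # marginal label counts
    count_gy = Counter(zip(groups, labels))  # joint counts
    return [
        (count_g[g] / n) * (count_y[y] / n) / (count_gy[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Toy example: group "A" has a 50% positive rate, group "B" 100%
weights = reweighing_weights(["A", "A", "B", "B"], [1, 0, 1, 1])
print(weights)  # [1.5, 0.5, 0.75, 0.75]
```

The resulting weights can be passed to any learner that accepts per-sample weights; the original features and labels stay untouched, which is what distinguishes pre-processing from in-processing.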
We develop post-processing methods that change the output of AI-based systems towards fairness, for example by re-ranking a recommendation list. Such approaches can enforce fairness in deployed systems, setting an important standard for future AI-based innovations such as ChatGPT.
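Such a post-processing re-ranking step can be sketched greedily: walk down the relevance-ordered list and, whenever the protected group's share in the current top-k prefix would fall below a minimum quota, promote the highest-ranked remaining protected item. This is a simplified illustration of the idea with a hypothetical proportional prefix constraint, not any specific published re-ranking algorithm.

```python
import math

def fair_rerank(ranking, protected, min_share):
    """Greedy post-processing re-ranking: `ranking` is a list of
    (item, group) pairs sorted by relevance, best first.  Every
    top-k prefix of the output must contain at least
    floor(min_share * k) protected-group items; otherwise the
    original relevance order is kept."""
    prot = [x for x in ranking if x[1] == protected]
    rest = [x for x in ranking if x[1] != protected]
    pos = {x[0]: i for i, x in enumerate(ranking)}  # original rank
    out, n_prot = [], 0
    while prot or rest:
        k = len(out) + 1
        must_prot = bool(prot) and n_prot < math.floor(min_share * k)
        # Take a protected item if the quota demands it, if no other
        # items remain, or if it is simply the next-best item anyway.
        if must_prot or not rest or (prot and pos[prot[0][0]] < pos[rest[0][0]]):
            out.append(prot.pop(0))
            n_prot += 1
        else:
            out.append(rest.pop(0))
    return out

# Toy recommendation list: protected group "F" requires a 50% share
ranking = [("a", "M"), ("b", "M"), ("c", "M"),
           ("d", "F"), ("e", "F"), ("f", "M")]
print([x[0] for x in fair_rerank(ranking, "F", 0.5)])
# ['a', 'd', 'b', 'e', 'c', 'f']
```

Because only the output ordering changes, the underlying recommender is treated as a black box, which is what makes post-processing attractive for systems that cannot be retrained.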
Scher, S., Kopeinik, S., Truegler, A., & Kowald, D. (2023). Modelling the Long-Term Fairness Dynamics of Data-Driven Targeted Help on Job Seekers. Scientific Reports.
Kopeinik, S., Mara, M., Ratz, L., Krieg, K., Schedl, M., & Rekabsaz, N. (2023). Show me a “Male Nurse”! How Gender Bias is Reflected in the Query Formulation of Search Engine Users. In Proceedings of the 2023 ACM CHI Conference on Human Factors in Computing Systems.