Fair AI

Research Focus

The mission of the Fair AI Research Area is to research and develop fair, bias-free algorithms and evaluation methods that minimize the risk of discrimination and promote trust in AI. In close collaboration with the other Research Areas, we work toward a deep understanding of the causal relationships that underlie erroneous decisions. Our goal is to develop fair algorithms and evaluation methods as essential building blocks of trustworthy AI, and to support users in self-determined, critical, and informed decision-making and interaction with AI-based systems (e.g., recommender systems).

Biases in Data and Systems

Cognitive and social biases in human perception and behavior are captured, reflected, and potentially amplified by AI-based systems. Common examples include the amplification of popularity bias and polarization in online information, and the adoption of gender bias by search engines. Our research aims to understand such cognitive and social biases in user interactions and other forms of AI training data.
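
To make this concrete, the following sketch shows one simple way such a bias can be quantified: it computes the Gini coefficient of item popularity over a small interaction log. The data and names are hypothetical illustrations, not taken from our systems.

```python
# Minimal sketch: quantifying popularity bias in an interaction log.
# The interaction data below is hypothetical.
from collections import Counter

interactions = [  # (user, item) pairs, e.g. clicks or listening events
    ("u1", "song_a"), ("u2", "song_a"), ("u3", "song_a"),
    ("u1", "song_b"), ("u2", "song_b"),
    ("u4", "song_c"),
]

item_counts = Counter(item for _, item in interactions)

def gini(values):
    """Gini coefficient of interaction counts:
    0 = attention spread evenly across items, 1 = all on one item."""
    xs = sorted(values)
    n = len(xs)
    weighted_sum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted_sum) / (n * sum(xs)) - (n + 1) / n

print(f"Gini of item popularity: {gini(item_counts.values()):.2f}")
```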

Fairness in Algorithmic Decision Support

To create the highest possible transparency in our partners' systems, we investigate the causes of social and cognitive biases in algorithms and develop modeling techniques tailored to individual users and user groups. This work contributes to an ecosystem in which personalized algorithms achieve a high level of transparency and fairness.
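
As an illustration of the kind of evaluation involved, the sketch below computes the statistical parity difference, a common group fairness metric, for two hypothetical user groups; a value of 0 means both groups receive positive decisions at the same rate.

```python
# Minimal sketch: statistical parity difference between two user groups.
# Group membership and decision outcomes below are hypothetical.
def positive_rate(decisions):
    """Share of positive decisions (1 = e.g. item recommended)."""
    return sum(decisions) / len(decisions)

decisions_group_a = [1, 1, 0, 1, 0, 1]
decisions_group_b = [0, 1, 0, 0, 1, 0]

spd = positive_rate(decisions_group_a) - positive_rate(decisions_group_b)
print(f"Statistical parity difference: {spd:+.2f}")  # 0.00 would be parity
```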

Privacy-aware Recommendations

Recommender systems typically require large amounts of personal and private data to compute personalized recommendations. We develop privacy concepts for recommender systems that give users control over how much data they share. With innovative obfuscation techniques, evaluated with our partners in sensitive environments such as healthcare, we balance recommendation accuracy against privacy.
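
As a minimal sketch of one standard obfuscation approach (k-ary randomized response, a basic local differential privacy mechanism; not necessarily the mechanism used with our partners), a user's rating profile can be perturbed before it leaves the device:

```python
# Minimal sketch: obfuscating a rating profile with k-ary randomized
# response, a basic local differential privacy mechanism. The epsilon
# value and the 1-5 rating scale are hypothetical.
import math
import random

def obfuscate_rating(rating, lo=1, hi=5, epsilon=1.0):
    """Report the true rating with probability p; otherwise report
    one of the other ratings on the scale uniformly at random."""
    k = hi - lo + 1
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p:
        return rating
    return random.choice([v for v in range(lo, hi + 1) if v != rating])

profile = {"film_a": 5, "film_b": 2, "film_c": 4}
noisy_profile = {item: obfuscate_rating(r) for item, r in profile.items()}
print(noisy_profile)  # shared with the recommender instead of raw ratings
```

Smaller epsilon values yield stronger privacy but noisier ratings, which is exactly the accuracy-privacy trade-off described above.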