Biases in Data and Systems
Cognitive and social biases in human perception and behavior are captured, reflected, and potentially amplified by AI-based systems. Common examples include the reinforcement of popularity bias and polarization in online information, or the reproduction of gender bias in search engines. Our research aims to understand such cognitive and social biases in user interactions and other forms of AI training data.
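One common way to quantify popularity bias of this kind is to measure how unevenly exposure is spread across recommended items. As a minimal, hypothetical sketch (the function name and data are illustrative, not from the source), the Gini coefficient over item exposure counts can be computed like this:

```python
from collections import Counter

def exposure_gini(recommendation_lists):
    """Gini coefficient of item exposure across recommendation lists.

    0 means every recommended item appears equally often; values
    closer to 1 mean a few popular items dominate the exposure.
    """
    counts = sorted(Counter(
        item for recs in recommendation_lists for item in recs
    ).values())
    n = len(counts)
    total = sum(counts)
    # Standard Gini formula over the sorted exposure counts.
    cum = sum((2 * (i + 1) - n - 1) * c for i, c in enumerate(counts))
    return cum / (n * total)

# Illustrative data: in `biased`, item "a" dominates the lists.
biased = [["a", "b"], ["a", "c"], ["a", "b"], ["a", "d"]]
balanced = [["a", "b"], ["c", "d"], ["e", "f"], ["g", "h"]]
print(exposure_gini(biased))    # 0.3125
print(exposure_gini(balanced))  # 0.0
```

A rising exposure Gini over successive training rounds is one simple signal that a feedback loop is amplifying popularity bias rather than merely reflecting it.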
Fairness in Algorithmic Decision Support
To make our partners’ systems as transparent as possible, we investigate the causes of social and cognitive biases in algorithms and develop modeling techniques tailored to individual users and user groups. This contributes to an ecosystem in which personalized algorithms achieve a high level of transparency and fairness.
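Fairness across user groups can be made concrete by comparing how well the system serves each group. The following is a hypothetical sketch (the function, group names, and scores are illustrative assumptions, not the group's actual method): it computes the gap between the best- and worst-served group, given per-user utility scores such as NDCG:

```python
def group_disparity(scores_by_group):
    """Gap between the best- and worst-served user group.

    `scores_by_group` maps a group label to a list of per-user
    utility scores (e.g. NDCG of each user's recommendations).
    A gap near 0 indicates similar service quality for all groups.
    """
    means = {g: sum(s) / len(s) for g, s in scores_by_group.items()}
    return max(means.values()) - min(means.values())

# Hypothetical per-user recommendation quality for two user groups.
scores = {"group_a": [0.82, 0.78, 0.80], "group_b": [0.61, 0.66, 0.59]}
print(round(group_disparity(scores), 2))  # 0.18
```

Reporting such a disparity value alongside overall accuracy is one way a decision-support system can surface, rather than hide, unequal treatment of user groups.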
Recommender systems typically require large amounts of personal and private data to compute personalized recommendations. We develop privacy concepts for recommender systems that give users control over how much data they share. Using obfuscation techniques evaluated with our partners in sensitive domains such as healthcare, we balance recommendation accuracy against privacy.
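To illustrate the general idea of profile obfuscation (this is a minimal, hypothetical sketch in the spirit of randomized response, not the specific techniques evaluated with the partners), each item in a user's profile can be kept with some probability and otherwise replaced with a random item, so the server cannot be certain any single item was genuinely liked:

```python
import random

def obfuscate_profile(liked_items, all_items, p_keep=0.7, rng=None):
    """Randomized-response style obfuscation of a user profile.

    Each genuinely liked item is kept with probability `p_keep`;
    otherwise it is replaced by a random item from the catalog.
    Higher p_keep preserves recommendation accuracy; lower p_keep
    gives the user more plausible deniability.
    """
    rng = rng or random.Random()
    obfuscated = set()
    for item in liked_items:
        if rng.random() < p_keep:
            obfuscated.add(item)          # keep the true signal
        else:
            obfuscated.add(rng.choice(all_items))  # inject noise
    return obfuscated

catalog = ["a", "b", "c", "d", "e", "f"]
profile = ["a", "b", "c"]
noisy = obfuscate_profile(profile, catalog, p_keep=0.7,
                          rng=random.Random(42))
print(noisy)
```

The `p_keep` parameter makes the accuracy-privacy trade-off explicit: it can be tuned per user, which matches the goal of giving users control over how much of their data they expose.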