Human-AI Interaction

Research Focus

The Human-AI Interaction research area develops interactive machine learning techniques, immersive and visual analytics, and AI-driven user interface methods that promote mutual understanding and cooperation between humans and AI. For this to succeed, both humans and AI must be able to communicate with each other, provide feedback, and act on it. The result is AI that enables fluid cooperation between humans and machines and frees people to apply their distinctly human strength: genuine creativity.

Human-AI Mixed Initiative Interaction

This research area develops novel human-in-the-loop methods (interfaces, models, and systems) that enable humans to obtain information about AI decisions and to provide feedback. Likewise, these methods enable the AI to solicit human input in order to improve the performance of the human-machine team.
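
As a hedged illustration of such a mixed-initiative loop (our own minimal sketch, not a system from this research area), the following Python example uses uncertainty sampling: the model takes the initiative to ask a human for labels only on the instances it is least sure about. The dataset, model choice, and the ask_human stub are assumptions chosen for brevity.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy data standing in for a real interactive task.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
labeled = list(range(20))                                   # small seed set with known labels
unlabeled = [i for i in range(len(X)) if i not in labeled]  # pool the human has not labeled yet

def ask_human(index):
    """Stand-in for a real labeling interface; here it simply reveals the true label."""
    return y[index]

model = LogisticRegression(max_iter=1000)
for round_ in range(5):
    model.fit(X[labeled], y[labeled])
    # AI initiative: pick the unlabeled instance the model is least certain about.
    probabilities = model.predict_proba(X[unlabeled])
    uncertainty = 1.0 - probabilities.max(axis=1)
    query = unlabeled[int(np.argmax(uncertainty))]
    # Human initiative: the person answers the model's question with a label.
    y[query] = ask_human(query)
    labeled.append(query)
    unlabeled.remove(query)
    print(f"round {round_}: accuracy on all data = {model.score(X, y):.3f}")
```

In a deployed system, ask_human would be replaced by an actual interface through which the person inspects the queried instance and returns a label or correction.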

Explainable AI

The goal of this research direction is to develop explainable algorithms and to provide users with methods for interpreting the AI’s learning processes and reasoning. Explainable AI plays a key role in the trustworthiness and acceptance of AI. Research focuses on algorithms for computing explainable features, methods for validating and analyzing them, and methods for presenting these features in an understandable form.
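
One concrete, hedged example of such an explainability technique is permutation importance, sketched below: it estimates how strongly a trained model relies on each input feature by shuffling that feature and measuring the drop in performance. The data and model here are illustrative assumptions, not a reference to a specific system from this research area.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic classification task with a few informative features.
X, y = make_classification(n_samples=400, n_features=6, n_informative=3, random_state=1)
model = RandomForestClassifier(random_state=1).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy;
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=1)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance = {score:.3f}")
```

The resulting importance scores are one example of "explainable features" that can then be validated and presented to users in an understandable form.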

Human-Aware AI Models

Our goal is to develop intelligent systems that can “reflect” on their actions in terms of human perception, intentions, and other contextual factors. We collect data and build data-driven models to understand people’s affective and emotional states in different interactions with technology and how these states influence the interaction. Such systems should be able to adapt to different human states while generating models of human intentions and behaviors.
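
As a purely hypothetical sketch of this idea, the example below trains a small classifier that infers a user state from simple interaction features and adapts the system’s behavior accordingly. The feature names, states, training data, and adaptation rules are invented for illustration and do not describe an actual model from this research area.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: [typing speed, error rate, pause length] -> inferred state.
X_train = np.array([[0.9, 0.05, 0.2],
                    [0.4, 0.30, 1.5],
                    [0.8, 0.10, 0.3],
                    [0.3, 0.40, 2.0]])
y_train = np.array(["focused", "frustrated", "focused", "frustrated"])

state_model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

def adapt_interaction(interaction_features):
    """Choose a system behavior conditioned on the inferred human state."""
    state = state_model.predict([interaction_features])[0]
    if state == "frustrated":
        return "offer step-by-step assistance and reduce notification frequency"
    return "stay in the background and keep interruptions minimal"

print(adapt_interaction([0.35, 0.28, 1.8]))  # likely inferred as "frustrated"
```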