Human-AI Mixed Initiative Interaction
This research area develops novel human-in-the-loop methods (interfaces, models, and systems) that let humans inspect AI decisions and provide feedback, and that likewise let the AI solicit human input to improve the performance of the human-machine team.
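A minimal sketch of such a mixed-initiative loop: the model acts autonomously when it is confident and takes the initiative to query the human otherwise. All function names, the toy model, and the confidence threshold are illustrative assumptions, not part of the research described above.

```python
# Illustrative mixed-initiative loop: the AI answers on its own when
# confident and solicits human input when its confidence is low.
# The model, oracle, and threshold are toy stand-ins (assumptions).

def model_predict(x):
    # Stand-in classifier: returns (label, confidence) for a numeric input.
    return ("positive" if x > 0 else "negative", min(abs(x), 1.0))

def human_feedback(x):
    # Stand-in for the human oracle queried through the interface.
    return "positive" if x >= 0 else "negative"

def mixed_initiative(inputs, threshold=0.5):
    decisions, queries = [], 0
    for x in inputs:
        label, conf = model_predict(x)
        if conf < threshold:          # AI takes the initiative: ask the human
            label = human_feedback(x)
            queries += 1
        decisions.append(label)
    return decisions, queries
```

In this sketch the human is consulted only on low-confidence inputs, so the team's accuracy improves without querying the human on every decision.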
The goal of this research direction is to develop explainable algorithms and to provide the user with methods for interpreting the AI’s learning processes and reasoning. Explainable AI plays a key role in the trustworthiness and acceptance of AI. Research focuses on algorithms for computing explainable features, on methods for validating and analyzing them, and on presenting these features in a form users can understand.
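One widely used technique for identifying which features matter to a model is permutation importance: a feature is important if shuffling its values degrades the model's accuracy. The sketch below uses a toy model and data set as illustrative assumptions; the research above is not limited to this method.

```python
import random

# Permutation importance sketch: shuffle one feature column and measure
# how much accuracy drops. Model and data are toy stand-ins (assumptions).

def accuracy(model, X, y):
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, seed=0):
    rng = random.Random(seed)
    baseline = accuracy(model, X, y)
    column = [row[feature] for row in X]
    rng.shuffle(column)  # break the feature's relationship to the labels
    X_perm = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, column)]
    return baseline - accuracy(model, X_perm, y)

# Toy model whose prediction depends only on feature 0.
model = lambda row: int(row[0] > 0)
X = [[1, 5], [-1, 5], [2, 5], [-2, 5]]
y = [1, 0, 1, 0]
```

Here feature 1 is constant, so permuting it changes nothing and its importance is zero; only feature 0 can receive a positive score, which is the kind of output a user-facing explanation could be built on.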
Human-Aware AI Models
Our goal is to develop intelligent systems that can “reflect” on their actions in terms of human perception, intentions, and other contextual factors. We are interested in collecting data and building data-driven models to understand people’s affective and emotional states across different interactions with technology, and how these states influence the interaction. Such systems should adapt to different human states while generating models of human intentions and behaviors.
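A simple way to picture such adaptation is a policy that maps an inferred affective state to an interaction style. The states and policy entries below are hypothetical examples, not findings from the research described above.

```python
# Hypothetical state-adaptive behavior: the system changes its interaction
# style based on an inferred affective state. States and policies are
# illustrative assumptions only.

ADAPTATION_POLICY = {
    "frustrated": {"pace": "slower", "explanations": "detailed", "offer_help": True},
    "engaged":    {"pace": "normal", "explanations": "brief",    "offer_help": False},
    "confused":   {"pace": "slower", "explanations": "detailed", "offer_help": True},
}

def adapt(inferred_state):
    # Fall back to a neutral policy for states the model does not cover.
    default = {"pace": "normal", "explanations": "brief", "offer_help": False}
    return ADAPTATION_POLICY.get(inferred_state, default)
```

In a full system the inferred state would come from a learned model of affective signals rather than being passed in directly; the table merely makes the adaptation step concrete.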