AI helps make predictions in critical areas such as law enforcement, healthcare and medical imaging analysis, or lending, where it helps determine whether a potential borrower should be granted credit. Contesting these predictions and decisions often ends in highly complex explanations from the AI models in charge, a status quo that is now changing.

AI models are rarely optimized for user understanding. This can be a serious problem, as being able to follow the reasoning behind an outcome is often essential for making final decisions or determining whether a mistake has been made. The social sciences have long defined guidelines for what makes a good explanation, yet these are almost always overlooked by researchers in Explainable AI (XAI). Using these guidelines and insights to build trustworthy, explainable AI makes sense, especially for real-life applications of XAI solutions. Research at the Know Center takes up this problem and proposes experiments to show empirically that tailoring explanations to the individual user brings substantial benefits for science and industry.
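
To illustrate what "tailoring" can mean in practice, the minimal sketch below turns a generic feature-importance output into a short plain-language summary for a lay user such as a loan applicant. It is not the Know Center's method; the dataset, feature names, and phrasing are invented for illustration, using standard scikit-learn tools.

```python
# Illustrative sketch: turning a raw feature-importance vector into a
# plain-language explanation a loan applicant could follow.
# The data and feature names are synthetic and purely for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "late_payments", "account_age_years"]

# Synthetic credit data: approval depends on debt ratio and late payments.
X = rng.normal(size=(1000, 4))
y = ((X[:, 1] < 0.2) & (X[:, 2] < 0.5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Model-agnostic importance scores: the "expert" view of the explanation.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Tailored view: the same information, ranked and phrased for a lay user.
ranked = sorted(zip(feature_names, result.importances_mean), key=lambda t: -t[1])
print("The decision was driven mainly by:")
for name, score in ranked[:2]:
    print(f"  - {name.replace('_', ' ')} (relative influence {score:.2f})")
```

The point of the sketch is the last step: the underlying importance scores stay the same, but what is shown, how much of it, and in what wording depends on who is reading the explanation.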

This is not only an improvement in XAI methods but a necessity in every field where XAI is applied, starting with process optimization in collaboration with industry partners, who already see immense value for their business.