Better algorithms and faster innovation: XAIVIER helps users avoid deep-learning model errors, make optimal use of different models, and achieve better results.

A tool developed at the Know Center now interactively helps users gain a deep understanding of the conclusions of deep-learning models while optimizing and speeding up the explanation process.

Deep-learning models have enabled great advances in fields such as autonomous driving, patient diagnostics, and language translation – but they are incredibly complex. Because of this complexity, humans often fail to understand why an algorithm arrives at certain decisions and what influences its findings in the background. Deficiencies or errors in the evaluation therefore cannot be identified immediately, which can have serious consequences. To address this, the Know Center developed XAIVIER.

XAIVIER, a web application for interactive explainable AI focused on time series, helps users explore explanation methods and select one appropriate for their dataset and model. Careful selection matters, because only a suitable method can effectively reveal potential errors in the model.

The tool is unique in that it combines an explanation service and a recommendation service within the same process.

The application explains why a given explanation method is suitable for the task at hand, or why it should be avoided. It can thus assist the user in many ways, for example by recognizing when a model has learned incorrect patterns, or by verifying that the model bases its predictions on the correct data characteristics.
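To illustrate the kind of explanation method such a tool might recommend for time series, the following minimal sketch implements occlusion-based saliency: each window of the series is masked, and the resulting change in the model's prediction indicates how influential that window is. This is a generic, hypothetical example and not XAIVIER's actual API; the model, window size, and baseline value are all assumptions for illustration.

```python
import numpy as np

def occlusion_saliency(model, series, window=5, baseline=0.0):
    """Score each point of a time series by how much masking the
    windows covering it changes the model's prediction.
    Higher score = more influential for the prediction."""
    base_pred = model(series)
    scores = np.zeros(len(series))
    for start in range(len(series) - window + 1):
        masked = series.copy()
        masked[start:start + window] = baseline  # occlude one window
        # attribute the prediction change to every point in the window
        scores[start:start + window] += abs(base_pred - model(masked))
    return scores

# Toy "model" (an assumption for the demo): predicts the mean
# of the last 10 points of the series.
model = lambda s: float(np.mean(s[-10:]))

series = np.zeros(50)
series[40:] = 1.0  # a late spike the model actually depends on

sal = occlusion_saliency(model, series)
# Saliency concentrates on the region the model relies on (the tail),
# which is exactly the kind of check that exposes a model attending
# to the wrong parts of the input.
```

If a model's saliency instead peaked on an irrelevant region (e.g., a sensor artifact at the start of the series), that would be a sign it has learned the wrong data characteristics.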

The demo of XAIVIER is now publicly available and awaits first applications by curious users: