The acceptance of AI in companies and public institutions, and with it the competitive and locational advantage it brings, will depend largely on how quickly trust in such applications can be established.

The biggest concerns about the use of AI in business and society relate to data privacy, data security and the traceability of results.

On the one hand, this research project develops AI algorithms and machine learning models that can be used and shared along value chains without disclosing confidential information. On the other hand, we design algorithms to be verifiable and explainable, so that specialists can check the correctness of the results at any time when using such systems.

Last but not least, this also makes it easier for users to interact with the AI, giving them control over the entire process – a key aspect of trust.

With our world-class expertise in trusted AI and cryptographic techniques, we are the first point of contact for companies deploying such solutions.

COMET Module

The new COMET modules of the Austrian Research Promotion Agency (FFG) contribute significantly to development and innovation. Their aim is to establish forward-looking research topics and build up new fields of strength: research at the highest level opens up topics that go well beyond the current state of the art.


The 4-million-euro module runs for four years; the official project kick-off, including a press conference, took place in Graz on February 10, 2020.

"With the COMET module on Artificial Intelligence, this focus can be specifically expanded at the Know Center in Graz."

Henrietta Egerth and Klaus Pseiner, Managing Directors of the Austrian Research Promotion Agency (FFG)

Research areas

Privacy-oriented AI algorithms

We develop secure AI methods that do not reveal sensitive information and also allow the evaluation of encrypted data. This enables, for example, secure cloud computing, easier sharing of AI models with customers and suppliers, and the combination of private and public databases without security risks. We prioritize data protection and are working on a new generation of secure and confidentiality-preserving AI algorithms.
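
As a toy illustration of the kind of computation this involves, the sketch below (in Python, with made-up figures) uses additive secret sharing: two parties contribute confidential values to a joint sum without either side revealing its input. It is a minimal example of the general idea, not one of the module's actual algorithms.

```python
import secrets

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(value: int, n_parties: int = 2):
    """Split a value into additive shares; each share alone reveals nothing."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Only the sum of all shares recovers the secret."""
    return sum(shares) % PRIME

# Two organizations each secret-share a confidential figure (illustrative values) ...
shares_a = share(1200)
shares_b = share(3400)

# ... and a joint statistic is computed on the shares, never on the raw data.
joint_shares = [(a + b) % PRIME for a, b in zip(shares_a, shares_b)]
print(reconstruct(joint_shares))  # 4600, without either raw input being exposed
```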

Explainable AI for analysts

AI solutions should not be a black box for their users. We are working on making the decisions of the algorithms used more understandable for analysts without revealing confidential data. In this way, trust in AI decisions is strengthened, models can be better visualized, and the corresponding results become easier to explain.
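
To make this concrete, the sketch below computes permutation feature importance for a trained model: the drop in accuracy when a feature's values are shuffled indicates how strongly the model relies on that feature. This is a generic, widely used explanation technique shown on a scikit-learn demo dataset, not the module's own tooling.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Mean accuracy drop when a feature is shuffled: a simple global explanation."""
    rng = np.random.default_rng(seed)
    baseline = model.score(X, y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # destroy this feature's information
            drops.append(baseline - model.score(X_perm, y))
        importances[j] = np.mean(drops)
    return importances

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

scores = permutation_importance(model, X_test, y_test)
for name, score in sorted(zip(data.feature_names, scores), key=lambda t: -t[1])[:5]:
    print(f"{name}: accuracy drop {score:.3f}")
```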

Explainable AI for users

End-user interaction with AI is becoming increasingly important, and we are researching how to improve it. For example, we are improving the explainability of personalized recommender systems and developing new learning paradigms that better train and empower users in working with AI.
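
As a small illustration of what an explainable recommendation can look like, the sketch below performs item-based collaborative filtering on a made-up rating matrix and, for each recommended item, names the already-liked item that contributes most to the recommendation. The data and item names are purely illustrative and do not represent the module's actual recommender research.

```python
import numpy as np

# Toy user-item rating matrix (rows: users, columns: items); 0 = not rated.
items = ["sci-fi novel", "space documentary", "cookbook", "baking show", "robotics kit"]
ratings = np.array([
    [5, 4, 0, 0, 5],
    [4, 5, 0, 1, 4],
    [0, 0, 5, 4, 0],
    [1, 0, 4, 5, 0],
], dtype=float)

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

# Item-item similarity derived from the rating columns.
sim = np.array([[cosine_sim(ratings[:, i], ratings[:, j])
                 for j in range(len(items))] for i in range(len(items))])

def recommend_with_explanation(user_ratings):
    """Score each unrated item and name the liked item that contributed most."""
    for i, r in enumerate(user_ratings):
        if r > 0:
            continue  # only recommend items the user has not rated yet
        contributions = [(user_ratings[j] * sim[i, j], j)
                         for j in range(len(items)) if user_ratings[j] > 0]
        score = sum(c for c, _ in contributions)
        _, best_j = max(contributions)
        print(f"Recommend '{items[i]}' (score {score:.2f}) "
              f"because you liked '{items[best_j]}'")

recommend_with_explanation(np.array([5, 0, 0, 0, 4], dtype=float))
```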

Data Driven Artificial Intelligence (DDAI)

The COMET module DDAI is funded by BMK and BMDW within the framework of COMET – Competence Centers for Excellent Technologies. The COMET program is managed by the FFG.