Efficient and independent test methods and testing technologies for AI systems that enable the (semi-)automatic evaluation of metrics will be developed as part of a strategic partnership between Austrian experts. The participants are Know-Center, SGS, IAIK at Graz University of Technology and BANDAS at the University of Graz.

 

Creating confidence in AI

Artificial intelligence has already fundamentally changed products and services and is one of the fastest-growing fields in technology. It is a key technology driver for securing the future viability of the economy and society. However, it may lead to unintended and adverse consequences if not used appropriately.

“The potential of AI in Europe will only be realized if the trustworthiness of data handling as well as fair, reliable and secure algorithms can be demonstrated. With a 360° perspective, we want to ensure that AI applications function in a technically compliant, reliable and unbiased manner. The focus is on all areas that are essential for the high quality and trustworthiness of AI: data, algorithms, cybersecurity, processes, ethics and law,” explains Stefanie Lindstaedt, CEO of Know-Center.

 

Bundled expertise

Similar to the General Data Protection Regulation (GDPR), the European Commission is planning future regulation of AI systems that will include a comprehensive conformity assessment by providers, making AI certification indispensable. The initiative’s goal is to help companies develop competitive and trustworthy AI-based products and systems and to reduce barriers to adopting AI. A multidisciplinary team of experts covers all areas, from research and consulting to certification.

“A cornerstone of trust in AI is compliance with standards and regulations, demonstrated through conformity assessments carried out by accredited third parties like SGS. In our partnership, we will develop new multidisciplinary tools and techniques to enable these assessments, covering areas such as cybersecurity, safety and ethics,” explains Siddi Wouters, Senior Vice President of Digital & Innovation at SGS. “This brings value to customers across the world.”

 

Cybercrime – a major challenge

Despite the enormous technological potential, the use of AI applications also involves uncertainties and risks, and there are a variety of ways to attack AI systems. “Conventional static testing is no longer sufficient here. Research into fundamentally new safety engineering concepts is needed to obtain continuous attestation of an AI system’s resilience against cyberattacks. TU Graz brings its expertise to the strategic partnership. For us, the initiative represents the logical deepening of an already successful cooperation with SGS, Know-Center and the University of Graz in the fields of computer science, software engineering and cybersecurity. In addition, it will benefit university research and teaching, into which the new content will be incorporated,” explains Harald Kainz, Rector of Graz University of Technology.

 

Open to further cooperation

Energie Steiermark AG, Leftshift One and REDWAVE will contribute use cases. The initiative is open to further partners from industry and science who are interested in working together on AI test methods and testing technologies.