Artificial intelligence is the foundation for solving the great problems of our time, from pandemics to climate change. It diagnoses diseases, directs our energy supply, and optimizes the circular economy of the near future. The basis for all of this is trust in the safety and reliability of the technology.
For companies and individuals in Europe, a lack of trust is the greatest obstacle to the widespread use of artificial intelligence. Given the enormous potential of systematic data use, both for solving our global challenges and for increasing value creation in our economy, the question is how this trust can be earned. The Know Center conducts research with the goal of making AI trustworthy and secure, setting benchmarks for AI development worldwide.
Recommendation systems underpin the success of online services such as Amazon, Netflix, and Spotify: they help users navigate a vast set of impressions and options. In many systems, however, recommendations are based not on the individual user but on the tastes and preferences of the broad majority. They therefore promote popular content and products but often fail to meet the needs and wishes of individuals. This puts those individuals at a disadvantage.
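The contrast between popularity-based and personalized recommendations can be sketched in a few lines of Python. The interaction log, user names, and item names below are purely illustrative:

```python
from collections import Counter

# Toy interaction log: (user, item) pairs. All names are hypothetical.
interactions = [
    ("alice", "song_a"), ("bob", "song_a"), ("carol", "song_a"),
    ("alice", "song_b"), ("bob", "song_c"), ("carol", "song_c"),
]

def popularity_recommendations(interactions, k=2):
    """Recommend the k globally most-clicked items to everyone."""
    counts = Counter(item for _, item in interactions)
    return [item for item, _ in counts.most_common(k)]

def personalized_recommendations(interactions, user, k=2):
    """Recommend unseen items liked by users who share an item with `user`."""
    seen = {i for u, i in interactions if u == user}
    neighbours = {u for u, i in interactions if i in seen and u != user}
    candidates = Counter(
        i for u, i in interactions if u in neighbours and i not in seen
    )
    return [item for item, _ in candidates.most_common(k)]

print(popularity_recommendations(interactions))            # identical for every user
print(personalized_recommendations(interactions, "alice"))  # specific to alice
```

The popularity-based list is the same for all users and dominated by the majority taste, while the personalized variant only surfaces items that match a user's own history.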
AI should therefore be fair – but data-driven artificial intelligence is not automatically fair: It takes a good bit of effort to safeguard a recommendation system against favoring certain choices and to ensure fair behavior. At the Know Center, a dedicated research group is working on the topic of fair AI, identifying ways in which artificial intelligence can actually become fair and inclusive.
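The article does not describe a specific fairness mechanism, so as one hypothetical illustration of safeguarding a recommender against favoring certain choices, here is a greedy re-ranking that caps the share of very popular "head" items at every position of a recommendation list:

```python
def fair_rerank(ranked, is_head, max_head_share=0.5):
    """Re-rank a recommendation list so that 'head' (very popular) items
    make up at most max_head_share of every list prefix.

    Walks the original ranking and postpones a head item whenever placing
    it would push the running share of head items above the cap; postponed
    items are appended at the end. A simple sketch, not a production method.
    """
    result, postponed = [], []
    head_count = 0
    for item in ranked:
        if is_head(item) and (head_count + 1) > max_head_share * (len(result) + 1):
            postponed.append(item)
        else:
            result.append(item)
            head_count += int(is_head(item))
    return result + postponed

# Hypothetical ranking: h* = head items, t* = long-tail items.
original = ["h1", "h2", "t1", "h3", "t2"]
print(fair_rerank(original, lambda x: x.startswith("h")))
```

With the cap at 0.5, the first two head items are pushed down so that long-tail content also reaches the top of the list.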
Recent recommendation systems developed by the Know Center already show how it could be done. They look at the user’s personal preferences and base their decisions on intelligent predictions and inferences, and fairness can be ensured with these personalized data sets. The special feature is that the user’s data is never “seen” directly by the recommendation system: the results are computed on fully encrypted data. This is a promising technology with immense potential in logistics and healthcare as well.
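The article does not name the cryptographic scheme in use. As one illustration of how a result can be computed without any party seeing the underlying data in plaintext, here is a minimal additive secret-sharing sketch; the modulus, party count, and rating values are arbitrary choices for the example:

```python
import secrets

Q = 2**61 - 1  # large prime modulus (an illustrative parameter choice)

def share(value, n=3):
    """Split an integer into n random additive shares mod Q.
    No single share reveals anything about the value."""
    shares = [secrets.randbelow(Q) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares mod Q."""
    return sum(shares) % Q

# Two users' ratings are split across three servers. Each server adds the
# shares it holds; the per-server sums reconstruct the total rating, yet
# no server ever handles a plaintext rating.
alice_rating, bob_rating = 4, 5
a_shares, b_shares = share(alice_rating), share(bob_rating)
server_sums = [(a + b) % Q for a, b in zip(a_shares, b_shares)]
print(reconstruct(server_sums))  # equals alice_rating + bob_rating
```

Homomorphic encryption achieves a similar effect with public-key cryptography; the shared idea is that arithmetic is performed on protected values and only the aggregate result is ever decrypted.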
Trustworthy AI is not only fair and secure, it is also understandable. If artificial intelligence can explain on request how it arrived at a decision or assessment, it is much easier to trust. Modern data-driven methods, however, often rely on highly complex statistical models whose inner workings are difficult to understand: once such a model has been trained on data, it is almost impossible to draw conclusions about its decision-making process. The solution is suitable methods that represent artificial intelligence visually, making it possible to see what a system's decisions are based on. Some technologies can even explain their reasoning to humans in written or spoken exchanges. The Know Center is researching how humans can communicate optimally, openly, and understandably with artificial intelligence.
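As a toy illustration of an explanation that shows what a decision is based on (not the Know Center's actual method), a linear scoring model can report each feature's contribution directly. All feature names and weights below are invented:

```python
# Hypothetical linear scoring model: score = sum(weight * feature_value).
weights = {"age": 0.2, "income": 0.5, "num_purchases": 0.8}
sample  = {"age": 30, "income": 2.0, "num_purchases": 4}

def explain(weights, sample):
    """For a linear model, each feature's contribution to the score is
    simply weight * value, so the decision can be broken down exactly
    and presented to the user, ranked by influence."""
    contributions = {f: weights[f] * sample[f] for f in weights}
    score = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, ranked

score, ranked = explain(weights, sample)
print(f"score = {score:.2f}")
for feature, contrib in ranked:
    print(f"  {feature:>14}: {contrib:+.2f}")
```

For non-linear models such as deep networks this exact decomposition no longer exists, which is precisely why dedicated explainability methods, including visual ones, are an active research topic.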
Trustworthy artificial intelligence and data are the key to sustainable and digital transformation in Europe. Artificial intelligence will only be accepted as a support in everyday life, as a partner in the work environment, and as a key to achieving our environmental goals if its technical basis is safe, transparent, fair, and comprehensible. The Know Center is working to turn this vision into reality.