Trustworthy AI refers to Artificial Intelligence that is developed and operated ethically and responsibly, with a focus on the safety of people and the protection of their data. It is intended to build trust in AI and its applications.

Developing AI according to trustworthy design principles, and deploying trustworthy AI, pays off in two ways. On the one hand, it ensures that the legal requirements soon to be placed on AI solutions are met, making investments in AI technology future-proof. On the other hand, it is the only way to earn the trust of users, which will be a key factor for market success.

Support on the path to trustworthy AI

  • Legal and technical advice on the topic of trustworthy AI
  • AI self-assessment tools for an initial risk analysis by companies
  • Supporting companies in research
  • Development of metrics/certification criteria
  • Training & lectures
  • Technological support
  • Certifications, seals of approval

Know Center 360° Model

The Know Center 360° model was created following the core requirements of the High-Level Expert Group on AI. Our interdisciplinary research addresses robustness, security, data governance, transparency, diversity, non-discrimination, fairness, and human oversight over the long term.

The High-Level Expert Group was set up by the European Commission to advise on its strategy for artificial intelligence, and it has put forward ethics guidelines to steer AI towards sustainability, growth, competitiveness and inclusion. These ethics guidelines serve as a blueprint for the AI Act, which is currently being drafted.


Do you have questions? We are happy to answer them!

Do you want to know whether your AI systems and digital processes comply with planned AI legal requirements? Then contact us. We will gladly advise and support you in the selection, implementation and optimization of your AI systems.

Principles of trustworthy AI

Human agency and oversight
e.g. fundamental rights, primacy of human agency and human oversight

Technical robustness and safety
e.g. resilience to attacks and security breaches, fallback plan and general safety, accuracy, reliability and reproducibility

Privacy and data governance
e.g. respect for privacy, quality and integrity of data, and data access

Transparency
e.g. traceability, explainability and communication

Diversity, non-discrimination and fairness
e.g. avoidance of unfair bias, accessibility and universal design, and stakeholder participation

Social and environmental well-being
e.g. sustainability and environmental protection, social impact, society and democracy

Accountability
e.g. verifiability, minimization and reporting of negative impacts, trade-offs and remedies