Recently, another type of AI has emerged: so-called generative AI, which focuses on creating new content from patterns learned in large volumes of data. Generative AI chatbots in particular are experiencing an unprecedented boom, with tech giants racing to release ever more powerful models and corporate adoption of the technology rapidly gaining momentum.

One in two U.S. companies is using AI

In early 2023, the job search platform ResumeBuilder.com surveyed 1,000 U.S. executives to find out how generative AI – specifically ChatGPT – has changed the corporate landscape. About 50% of the companies surveyed are actively using ChatGPT in their operations, and about 48% say the technology has already replaced employees – with many more following suit.

The global integration of generative AI is accelerating rapidly. Europe in particular is leading the way with legislative efforts such as the AI Act, which are intended to mitigate the downsides of this rapid innovation and establish guidelines for responsible business practice and adoption.

Generative AI is upending business models

Generative AI will completely transform the way our businesses and economy operate. It will redefine how we interact with software and fundamentally reshape business structures, job descriptions, and more. The technology already generates text, speech, video, music, images, and code, and exciting new possibilities emerge when user- or company-specific data enters the mix.

Explosive power like that of dynamite or the steam engine

Roman Kern, CSO of the Know Center, draws a comparison: “Generative AI has the potential to be the new ‘telephone,’ which, as we all know, fundamentally changed the way we communicate. Generative AI will have a similar impact, especially on our economy. As a result, companies that don’t use AI will be replaced by companies that do.”

While the hype shows no sign of abating, companies and their leaders still need to be intentional about how they use these technologies. AI legislation such as the AI Act is on the way, and issues such as data security and trustworthiness must be examined further and treated with caution. Blind trust or overzealous implementation risks ignoring the real consequences and drawbacks of the technology in its current state.

Bizarre breeding ground for hallucinated facts

The content created by systems like ChatGPT has raised important questions about accuracy, security, and trustworthiness. When pushed to perform extreme or more demanding tasks, AI models tend to produce bizarre responses, so-called “hallucinations.” These answers are often confidently packaged as facts, making the false information difficult for users – and sometimes even domain experts – to detect.

Seven principles of trustworthy AI

Training data for AI is often subject to bias, and false information can enter the mix. AI bias can have harmful consequences for affected user groups; unjustified discrimination and dangerous half-truths are just the tip of the iceberg. This calls for safeguards, regulations, and modern methods that mitigate the technology’s most pressing problems – in short, for trustworthy AI, an umbrella term covering seven essential requirements that AI should fulfill:

– human agency and oversight
– technical robustness and safety
– privacy and data governance
– transparency
– diversity, non-discrimination, and fairness
– societal and environmental well-being
– accountability

Deep learning with deepfake risk

Generative AI is based on deep-learning models, which are complex and whose decision-making processes are hard to trace. This lack of transparency can lead to problems: if you can’t understand how an AI system reached its conclusions, you can’t spot errors before they have potentially devastating consequences. Modern approaches are trying to get to the root of this problem – in particular, the young research field of XAI (Explainable AI), which aims to open up such “black box” models and make their decisions transparent, understandable, and thus more trustworthy for humans.
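
To make the idea concrete, here is a minimal sketch of one simple, model-agnostic explanation technique – permutation importance – applied to an ordinary classifier. The dataset, model, and library choices are illustrative assumptions, not taken from this article; XAI research covers far richer methods, especially for generative models.

```python
# Minimal illustrative sketch: permutation importance, a simple
# model-agnostic explanation technique. Shuffling one feature at a
# time and measuring the drop in held-out accuracy reveals which
# inputs an otherwise opaque model actually relies on.
# Dataset and model are toy stand-ins, not from the article.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# The "black box": an ensemble whose internals are hard to inspect directly.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature 10 times and record the mean accuracy drop.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five features the model depends on most.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.4f}")
```

Techniques like this only scratch the surface; for large generative models, researchers combine many such post-hoc methods to trace how inputs shape outputs.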

Looking behind the scenes of AI with XAI

Currently, generative AI provides no insight into how it generates its responses, leaving users unable to distinguish trustworthy information from false (or invented) information. The goal of XAI for generative models is to let experts and users understand the reasons behind specific results and make informed judgments about their reliability and quality. Insight into these models also helps identify existing social biases and discrimination: AI absorbs its training data uncritically, so discriminatory and unfair conditions that existed long before training are simply recycled. Explainable AI makes it possible to identify unfairness, discrimination, and “blind spots,” and to take the first steps toward eliminating them.
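
As one hedged illustration of how such “blind spots” might be quantified, the snippet below computes a basic fairness statistic, the demographic parity difference – the gap in positive-outcome rates between two groups. The metric choice, decisions, and group labels are hypothetical, invented for this sketch.

```python
# Minimal illustrative sketch: measuring one simple notion of bias.
# The demographic parity difference is the gap in positive-outcome
# rates between two groups. All data here is hypothetical.

import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Positive-prediction rate of group 1 minus that of group 0."""
    return float(y_pred[group == 1].mean() - y_pred[group == 0].mean())

# Hypothetical model decisions and a sensitive attribute (e.g., gender).
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

gap = demographic_parity_difference(y_pred, group)
print(f"Demographic parity difference: {gap:+.2f}")  # here: +0.20
```

A nonzero gap alone does not prove discrimination, but it flags where a closer, explainable look at the model is warranted.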

Only what is explainable is trustworthy

Applying XAI to generative AI models therefore holds great potential: several of the problems mentioned above could be addressed by XAI approaches.

Roman Kern: “Generative AI is not infallible and still depends on human oversight. AI chatbots will spread explosively into many areas of human activity. Developing and integrating reliable XAI methods to assess the trustworthiness of generated information is therefore more important than ever.”

Science already has some answers to the most pressing questions of fair and trustworthy AI development, especially in a business context. XAI is just one of many approaches that will enable AI to be used sustainably in business and society. Our experts think ahead, pointing out risks as well as opportunities for the economy and society.