Secure data exchange and strong data protection open the door to collaboration in science and industry. In this context, computing on fully encrypted data is a forward-looking approach.

Transfer learning makes it possible to train reliable AI models even when only a small pool of data is available, an efficient way to still obtain accurate evaluations in such cases. A suitable AI model is first pre-trained on a large data set, and the knowledge it has learned is then transferred to the small data set. The pre-trained model does not have to be retrained from scratch; with minor adjustments and little data it can still deliver very accurate results. However, data protection remains a weak point here.
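To make the idea concrete, here is a minimal transfer-learning sketch in PyTorch. The network, layer sizes, and data are placeholders chosen purely for illustration, not details of CryptoTL itself: the lower layers, assumed to be pre-trained on the large data set, are frozen, and only a small head is fine-tuned on the small data set.

```python
import torch
import torch.nn as nn

# Hypothetical feature extractor, assumed to be pre-trained on the large data set.
backbone = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
)
head = nn.Linear(64, 2)  # small task-specific classifier

# Freeze the pre-trained layers: only the head is adjusted during fine-tuning.
for p in backbone.parameters():
    p.requires_grad = False

model = nn.Sequential(backbone, head)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Placeholder "small data set": a handful of labelled samples.
x_small = torch.randn(20, 32)
y_small = torch.randint(0, 2, (20,))

for _ in range(50):  # a few fine-tuning steps are enough
    optimizer.zero_grad()
    loss = loss_fn(model(x_small), y_small)
    loss.backward()
    optimizer.step()
```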

The training data can often be reconstructed from a trained model in just a few steps. If a company wants to provide its suppliers with a pre-trained model for their own AI evaluations, for example, there is a risk that the data used for training becomes public. The Know Center has therefore developed the CryptoTL framework.

The framework combines transfer learning with homomorphic encryption, an encryption method that allows computations to be performed directly on fully encrypted data. From the very start, all work is therefore carried out on encrypted data sets and models. CryptoTL protects not only the large data set used to pre-train an algorithm and the algorithm itself; the small data set is also sent to the pre-trained model only in encrypted form. This makes insights possible that, for privacy reasons, would otherwise be out of reach.
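As an illustration of what homomorphic encryption makes possible, the following sketch uses the open-source TenSEAL library with the CKKS scheme. The article does not say which library or scheme CryptoTL relies on, so this is only an assumption for demonstration purposes: the data owner encrypts its input, the computation (here a simple weighted sum) runs entirely on ciphertexts, and only the key holder can decrypt the result.

```python
import tenseal as ts

# Data owner's side: set up CKKS keys (parameters chosen for illustration only).
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()

# The sensitive input vector is encrypted before it leaves the owner's side.
features = [0.2, 1.5, -0.7, 3.1]
enc_features = ts.ckks_vector(context, features)

# Model side: computes on the ciphertext without ever seeing the plaintext,
# e.g. a linear layer with plaintext weights and bias.
weights = [0.5, -1.0, 2.0, 0.25]
enc_result = enc_features.dot(weights) + 0.1

# Back at the data owner: only the key holder can decrypt the result.
print(enc_result.decrypt())  # approximately the weighted sum plus 0.1
```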

In CryptoTL, homomorphic encryption, which is otherwise computationally intensive and rather inefficient, is applied only to a portion of the algorithm. As a result, sub-second runtimes can be achieved on commodity devices such as notebooks, so nothing stands in the way of applying CryptoTL in real use cases.
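One way such a split can look in practice (an assumption for illustration, not a description of CryptoTL's internal design): only a pre-trained lower layer is evaluated under homomorphic encryption, the encrypted intermediate features are returned to the data owner, and the small task-specific head then runs cheaply on plaintext. All weights and dimensions below are placeholders.

```python
import numpy as np
import tenseal as ts

# CKKS context as in the previous sketch (illustrative parameters).
context = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
context.global_scale = 2 ** 40
context.generate_galois_keys()

# Hypothetical pre-trained lower layer, kept on the model owner's side.
W_pre = np.random.randn(4, 3)   # 4 inputs -> 3 hidden features
b_pre = np.random.randn(3)

# Data owner encrypts a sample and sends only the ciphertext.
x = [0.2, 1.5, -0.7, 3.1]
enc_x = ts.ckks_vector(context, x)

# Encrypted portion: the pre-trained linear layer is evaluated on ciphertexts.
enc_hidden = enc_x.matmul(W_pre.tolist()) + b_pre.tolist()

# Plaintext portion: the data owner decrypts the features and runs the small,
# locally trained head in the clear, which keeps runtimes low.
hidden = np.array(enc_hidden.decrypt())
W_head = np.random.randn(3, 2)  # placeholder task-specific head
logits = np.maximum(hidden, 0.0) @ W_head
print(logits)
```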