A recently published scientific paper on the topic of “Distributed Recommendation Systems” by an international team of researchers concludes that a lot of personal data is needed to make better recommendations. Our team in the Social Computing research area replicated the study and subsequently refuted this result.

We encounter recommendation systems everywhere: products, services and music are recommended to us on a daily basis. Traditionally, these systems are organized in a centralized manner, meaning that sensitive user data and the service providers' models are administered in external data centers. Data stored on our smartphones, for example, ends up in these data centers via third-party providers, posing a risk to user privacy.

Distributed recommendation systems: Private data does not leave the smartphone & Co.

With “distributed recommendation systems”, data remains stored directly on the end device (e.g. smartphone, laptop, smartwatch) and never leaves it. User interaction data also stays in place, which provides far better protection of user privacy.
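To make this concrete, here is a minimal, illustrative sketch of a federated matrix-factorization recommender in Python. All names (local_update, the factor dimensions, the simulated devices) are hypothetical assumptions and not taken from the paper; the point is only that raw ratings stay on the device and solely parameter updates travel to the server.

```python
import numpy as np

# Illustrative sketch only -- not the paper's implementation.
N_ITEMS, DIM = 50, 8
rng = np.random.default_rng(0)

def local_update(item_factors, user_ratings, lr=0.05, epochs=5):
    """Runs on the end device: fits a private user vector and returns
    only an update for the shared item factors, never the raw ratings."""
    user_vec = rng.normal(scale=0.1, size=DIM)
    update = np.zeros_like(item_factors)
    for _ in range(epochs):
        for item, rating in user_ratings.items():
            err = rating - user_vec @ item_factors[item]
            user_vec += lr * err * item_factors[item]
            update[item] += lr * err * user_vec
    return update  # the interaction data itself stays on the device

# Server side: shared item factors plus a few simulated devices,
# each holding its own private ratings.
item_factors = rng.normal(scale=0.1, size=(N_ITEMS, DIM))
devices = [
    {int(i): int(rng.integers(1, 6))
     for i in rng.choice(N_ITEMS, size=10, replace=False)}
    for _ in range(3)
]
for _ in range(10):  # federated training rounds
    updates = [local_update(item_factors, ratings) for ratings in devices]
    item_factors += np.mean(updates, axis=0)  # federated averaging
```

In this setup the server only ever sees aggregated parameter updates, which is what keeps the raw interaction histories on the device.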

No loss of quality in recommendations for users with little interaction data

Dominik Kowald, Research Area Manager of the Social Computing area, on the interesting outcome: “Using AI-supported methods such as federated learning and meta-learning, we determined that there was virtually no loss of quality for users with little interaction data, regardless of whether they shared 100% of their data or only 30%. For users with a lot of interaction data, however, this was different.”
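The following sketch illustrates, in purely hypothetical terms, how such a privacy constraint can be simulated in an evaluation: each user contributes only a fraction of their on-device interactions to training. The helper apply_privacy_budget and the two user profiles are illustrative assumptions, not the paper's evaluation code, and meta matrix factorization itself is not reimplemented here.

```python
import numpy as np

rng = np.random.default_rng(1)

def apply_privacy_budget(user_ratings, share):
    """Keep only a fraction of a user's on-device interactions for
    training, simulating a strict privacy constraint (hypothetical helper)."""
    items = list(user_ratings)
    n_keep = max(1, int(round(share * len(items))))
    kept = rng.choice(items, size=n_keep, replace=False)
    return {int(i): user_ratings[int(i)] for i in kept}

# One user with few and one with many interactions.
light_user = {i: int(rng.integers(1, 6)) for i in range(5)}
heavy_user = {i: int(rng.integers(1, 6)) for i in range(200)}

for name, profile in [("light user", light_user), ("heavy user", heavy_user)]:
    for share in (1.0, 0.3):
        train = apply_privacy_budget(profile, share)
        print(f"{name}: sharing {share:.0%} of the data "
              f"-> {len(train)} interactions available for training")
```

The printed numbers only show the experimental setup; the quality comparison itself is reported in the paper.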


The research work was carried out within the scope of the COMET module DDAI and the EU-H2020 project TRUSTS, both of which thematically address the secure and explainable use of sensitive data. The resulting paper “Robustness of Meta Matrix Factorization Against Strict Privacy Constraints” was presented at ECIR 2021 in the “Federated Learning” session and is freely available.

Link to Paper