Accepted contributions

An empirical analysis of recommender systems robustness to shilling attacks

Anu Shrestha, Francesca Spezzano and Maria Soledad Pera

Recommender systems play an essential role in our digital society as they suggest products to purchase, restaurants to visit, and even resources to support education. Recommender systems based on collaborative filtering are the most popular among those used in e-commerce platforms to improve the user experience. Given their collaborative nature, these recommenders are especially vulnerable to shilling attacks, i.e., malicious users creating fake profiles to provide fraudulent reviews, which are deliberately written to sound authentic and aim to manipulate the recommender system to promote or demote target products, or simply to sabotage the system. Therefore, understanding the effects of shilling attacks and the robustness of recommender systems has gained massive attention. However, empirical analysis thus far has assessed the robustness of recommender systems via simulated attacks, and there is a lack of evidence on the impact of fraudulent reviews in real-world settings. In this paper, we present the results of an extensive analysis conducted on multiple real-world datasets from different domains to quantify the effect of shilling attacks on recommender systems. We focus on the performance of various well-known collaborative filtering-based algorithms and their robustness to different types of users. Trends emerging from our analysis unveil that, in the presence of spammers, recommender systems are not uniformly robust for all types of benign users.
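To make the attack model concrete, here is a minimal sketch of a shilling "push" attack against a user-item rating matrix: fake profiles give the target item the maximum score and camouflage themselves with filler ratings near each item's mean. The function name, profile sizes, and rating scale are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of a shilling "push" attack: fake profiles rate the target
# item with the maximum score and a random set of filler items near each
# item's mean, so the profiles blend in with benign users.
# All names and parameters here are illustrative, not from the paper.
import numpy as np

def inject_push_attack(ratings, target_item, n_fake_users=50,
                       n_fillers=20, r_max=5.0, rng=None):
    """ratings: (n_users, n_items) array with np.nan for missing entries."""
    if rng is None:
        rng = np.random.default_rng(0)
    n_items = ratings.shape[1]
    item_means = np.nanmean(ratings, axis=0)

    fake = np.full((n_fake_users, n_items), np.nan)
    candidates = [i for i in range(n_items) if i != target_item]
    for u in range(n_fake_users):
        fillers = rng.choice(candidates, size=n_fillers, replace=False)
        # Filler ratings sampled around each item's mean to look authentic.
        fake[u, fillers] = np.clip(
            rng.normal(item_means[fillers], 1.0), 1.0, r_max)
        fake[u, target_item] = r_max  # promote the target item
    return np.vstack([ratings, fake])
```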

"To trust a LIAR”: Does Machine Learning really classify fine-grained, Fake News statements?

Mark Mifsud, Colin Layfield, Joel Azzopardi and John Abela

Fake news refers to deceptive online content and is a problem which causes social harm. Early detection of fake news is therefore a critical but challenging problem. In this paper, we attempt to determine whether state-of-the-art models trained on the LIAR dataset can be leveraged to reliably classify short claims according to six levels of veracity, ranging from “True” to “Pants on Fire” (absolute lies). We investigate the application of the transformer models BERT, RoBERTa and ALBERT, which have previously performed well on several natural language processing tasks, including text classification. A simple neural network (FcNN) was also used to enhance each model’s result by utilising the sources’ reputation scores. We achieved higher accuracy than previous studies that used more data or more complex models. Yet, after evaluating the models’ behaviour, numerous flaws emerged. These include bias and the fact that the models do not really capture veracity, which makes them prone to adversarial attacks. We also consider the possibility that language-based fake news classification on such short statements is an ill-posed problem.
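As a rough sketch of the setup described above: fine-tune a transformer for six-way classification on LIAR, then fuse its logits with a scalar source-reputation score in a small fully connected network. The exact architecture of the paper’s FcNN is not given here, so the fusion head below, and the example reputation score, are assumptions.

```python
# Hedged sketch: six-way veracity classification on LIAR-style claims,
# with a small fully connected head ("FcNN" stand-in; the paper's exact
# architecture is not specified here) that adds a source-reputation score.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=6)

class FcNN(nn.Module):
    """Combine the transformer's 6 class logits with a scalar reputation score."""
    def __init__(self, n_classes=6, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_classes + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes))

    def forward(self, logits, reputation):
        return self.net(torch.cat([logits, reputation.unsqueeze(-1)], dim=-1))

# Example: one short claim plus a hypothetical reputation score in [0, 1].
inputs = tokenizer("The economy added 200,000 jobs last month.",
                   return_tensors="pt")
logits = bert(**inputs).logits            # shape (1, 6)
reputation = torch.tensor([0.42])         # hypothetical source reputation
pred = FcNN()(logits, reputation).argmax(dim=-1)
```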

PaRIS: Polarization-aware Recommender Interactive System

Mahsa Badami and Olfa Nasraoui

One phenomenon that has recently been observed online is the emergence of polarization among users on social networks, where the population gets divided into groups with opposite opinions. As recommender system algorithms become more selective in filtering what users see and discover, one important question arises: could these algorithms themselves end up reinforcing that polarization? In this paper, we propose a new counter-polarization approach for existing Matrix Factorization based recommender systems. Our work represents another step toward counteracting polarization in human-generated data and Machine Learning algorithms.

User Polarization Aware Matrix Factorization for Recommendation Systems

Wenlong Sun and Olfa Nasraoui

User feedback exhibits different rating patterns due to users’ preferences, cognitive differences, and biases. However, little research has taken cognitive biases into account when building recommender systems. Polarization, in particular, is an emerging social phenomenon with serious consequences in the era of social media communication. In this paper, we propose novel methods to incorporate user polarization into matrix factorization-based recommendation systems, with the hope of producing algorithmic recommendations that are less biased by extreme polarization. Our experimental results show that our proposed methods outperform widely used methods on both rank-based and value-based evaluation metrics, as well as on polarization-aware metrics.
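The abstract does not spell out how polarization enters the factorization, so the following is only an illustrative sketch under one plausible reading: a standard SGD-trained matrix factorization in which each observed rating’s loss is reweighted by a per-user polarization score, so that extreme, polarized feedback pulls the latent factors around less strongly.

```python
# Illustrative sketch only, not the paper's exact formulation: SGD matrix
# factorization where each rating's update is down-weighted by the user's
# polarization score pol[u] in [0, 1] (1 = highly polarized).
import numpy as np

def polarization_aware_mf(triples, pol, n_users, n_items,
                          k=16, lr=0.01, reg=0.05, epochs=20, seed=0):
    """triples: iterable of (user, item, rating) observations."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))
    Q = 0.1 * rng.standard_normal((n_items, k))
    for _ in range(epochs):
        for u, i, r in triples:
            pu = P[u].copy()                 # keep old factors for both updates
            err = r - pu @ Q[i]
            w = 1.0 - 0.5 * pol[u]           # down-weight polarized users
            P[u] += lr * (w * err * Q[i] - reg * pu)
            Q[i] += lr * (w * err * pu - reg * Q[i])
    return P, Q
```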

The Privacy versus Disclosure Appetite Dilemma: Mitigation by Recommendation

Rim Ben Salem, Esma Aïmeur and Hicham Hage

Social Networking Sites (SNS) are growing exponentially and have undoubtedly become an intrinsic part of our lives. This is accompanied by a spike in the amount of time spent online, particularly by teenagers and young adults, who allocate an average of three hours a day to various types of SNS. From commenting and posting to sharing opinions, selfies, and videos, they expose numerous pieces of personal information, jeopardizing their privacy. Specifically, this self-disclosure is driven by various factors, including the individual’s sharing needs, the information being shared, and the target audience. As such, it is a multi-layered issue for which the one-size-fits-all interventions currently in use have not proven effective. This paper proposes a novel harm-aware recommender system-based solution to help users make privacy-preserving decisions while using social media. This novel take on self-disclosure mitigation leverages notions from behavioural economics as well as psychological measurements. One of the contributions of this paper is the notion of disclosure appetite, a user-specific measure that encompasses a user’s perception of privacy and their drive to reveal private information. The tailored privacy-aware recommendations rely on this psychometric value together with the sensitivity of the data being revealed. Through a trade-off between the two parameters, the system aims to mitigate the privacy compromise while considering the preferences of the user. In the era of oversharing and living virtually, a system that handles private data with the intention of preserving it is more crucial than common uses of recommenders, where the personal information exchanged rarely goes beyond preferences; the consequences and preventive potential of this work can thus have a bigger impact on multiple facets of users’ lives. Finally, an empirical evaluation is conducted using participants from the US, Canada and Europe.
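As a toy illustration of the trade-off described above, a recommendation could weigh the sensitivity of the disclosed data against the user’s disclosure appetite. The scoring rule and thresholds below are hypothetical; the abstract does not give the paper’s actual model.

```python
# Toy sketch only: weigh data sensitivity against the user's disclosure
# appetite and recommend an action. The weights and thresholds are
# hypothetical illustrations, not the paper's actual model.
def recommend_action(disclosure_appetite: float, sensitivity: float) -> str:
    """Both inputs are assumed to lie in [0, 1]."""
    # Estimated harm grows with sensitivity and shrinks as the user's
    # expressed appetite (their preference to share) is respected.
    harm = 0.7 * sensitivity - 0.3 * disclosure_appetite
    if harm > 0.4:
        return "warn: suggest not sharing this content"
    if harm > 0.15:
        return "nudge: suggest restricting the audience"
    return "allow: low estimated privacy harm"
```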

Adversarial Attacks against Visual Recommendation: an Investigation on the Influence of Items’ Popularity

Vito Walter Anelli, Tommaso Di Noia, Eugenio Di Sciascio, Daniele Malitesta and Felice Antonio Merra

Visually-aware recommender systems (VRSs) integrate products’ image features with historical user feedback to enhance recommendation performance. Such models have been shown to be very effective in different domains, ranging from fashion and food to points of interest. However, test-time adversarial attack strategies have recently unveiled severe security issues in these recommender models. Indeed, adversaries can harm the integrity of recommenders by uploading item images with human-imperceptible adversarial perturbations capable of pushing a target item into higher recommendation positions. Given the importance of items’ popularity to recommendation performance, in this work we evaluate whether items’ popularity influences the attacks’ effectiveness. To this end, we perform three state-of-the-art adversarial attacks against VBPR (a standard VRS), varying the adversary’s knowledge (white- vs. black-box) and capability (the magnitude of the perturbation). The results obtained by evaluating the attacks on two real-world datasets shed light on the remarkable efficacy of the attacks against the least popular items, a finding to keep in mind when planning novel defenses.
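For intuition, here is a minimal white-box, FGSM-style perturbation of the kind the abstract describes: nudging a target item’s image by a small epsilon in the gradient direction that raises its predicted relevance. The scorer is a toy stand-in, not VBPR itself, and whether FGSM is among the paper’s three attacks is not stated in the abstract.

```python
# Hedged sketch of a white-box "push" perturbation in the FGSM style:
# ascend the relevance score by epsilon * sign(gradient), keeping the
# change to the image human-imperceptible. ToyVisualScorer is a stand-in
# for a visually-aware recommender such as VBPR, not the real model.
import torch
import torch.nn as nn

def fgsm_push(image, user_emb, scorer, epsilon=2.0 / 255):
    """image: (3, H, W) tensor in [0, 1]; scorer(image, user_emb) -> scalar score."""
    image = image.clone().requires_grad_(True)
    score = scorer(image, user_emb)
    score.backward()
    # One signed-gradient step upward in the score, clipped to valid pixels.
    adv = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)
    return adv.detach()

class ToyVisualScorer(nn.Module):
    """Hypothetical scorer: CNN image features dotted with user latent factors."""
    def __init__(self, dim=16):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 4, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(4, dim))

    def forward(self, image, user_emb):
        return self.cnn(image.unsqueeze(0)).squeeze(0) @ user_emb

# Usage: perturb a random item image for a random user embedding.
scorer = ToyVisualScorer()
adv_image = fgsm_push(torch.rand(3, 64, 64), torch.randn(16), scorer)
```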