Workshop on Online Misinformation- and Harm-Aware Recommender Systems
Co-located with RecSys 2021
News: Proceedings are now available at CEUR!
We want to thank CEUR for supporting our work!
In recent years, there has been an increase in the dissemination of false news, rumors, deception and other forms of misinformation, as well as abusive language, incitement to violence, harassment and other forms of hate speech, across online platforms. These unwanted behaviours lead to online harms that have become a serious problem, with negative consequences ranging from public health issues to the disruption of democratic systems. While these phenomena are most widely observed in social media, they affect the experience of users on many kinds of online platforms. For example, collaborative filtering approaches in e-commerce sites are vulnerable to low-quality reviews, manipulation and attacks. In this regard, Amazon has been criticized for allowing vendors to promote white supremacist and anti-Semitic merchandise, which can foster hate crimes. Moreover, PayPal monitors users' transactions to avoid serving users who promote hateful actions, regardless of whether their activities are illegal.
In the last year, the COVID-19 pandemic generated an increased need for information in response to a highly emotional and uncertain situation. In this context, cases of misinformation linked to health recommendations have been reported during the pandemic (for example, different media outlets, and even politicians, recommended consuming hot beverages and chlorine dioxide to prevent the disease). Such misinformation undermines individual responses to COVID-19, compromises the efficacy of evidence-based policy interventions, and erodes the credibility of scientific expertise, with potentially long-term (and even deadly) consequences. At the same time, action was demanded to control the "tsunami" of hate speech that has been rife during the pandemic.
Recommender systems play a central role in online information consumption and user decision-making by leveraging user-generated information at scale. In this role, they are both affected by different forms of online harm, which hinders their capacity to produce accurate predictions, and, at the same time, become unintended means for its spread and amplification. In their attempt to deliver relevant and engaging suggestions, recommendation algorithms are prone to introducing biases and to fostering phenomena such as filter bubbles, echo chambers and opinion manipulation. Some of these issues stem from the core concepts and assumptions on which recommender systems are built: popularity bias and homogeneity bias, for example, arise from the reliance on frequency heuristics and the search for like-minded individuals, respectively. Biases in the data (e.g., stemming from imbalanced data), in algorithm design, in evaluation, and in user interaction or observation, notably those related to relevance feedback loops (e.g., ranking), limit the exposure of users to diverse points of view and make them more vulnerable to manipulation by misinformation and disinformation.
Equipping recommender systems with misinformation- and harm-awareness mechanisms becomes essential not only to mitigate the negative effects of the diffusion of unwanted content, but also to increase the user-perceived quality of recommendations across a wide range of online platforms, from social networks to e-commerce sites. Novel strategies such as diversification of recommendations, bias mitigation, model-level disruption, and explainability and interpretation can help users make informed decisions in the presence of misinformation, hate speech and other forms of online harm.
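To make one of these strategies concrete, the following is a minimal sketch of diversification via maximal marginal relevance (MMR) re-ranking, one common way to trade off relevance against redundancy in a recommendation list. The item identifiers, relevance scores, similarity function and lambda value below are all illustrative assumptions, not part of any specific system discussed at the workshop.

```python
def mmr_rerank(candidates, relevance, similarity, k, lam=0.7):
    """Greedy MMR re-ranking: balance item relevance against redundancy.

    candidates: list of item ids
    relevance:  dict mapping item id -> relevance score
    similarity: function (item, item) -> similarity in [0, 1]
    k:          number of items to select
    lam:        trade-off weight (1.0 = pure relevance, 0.0 = pure diversity)
    """
    selected = []
    pool = list(candidates)
    while pool and len(selected) < k:
        def mmr_score(item):
            # Penalize items similar to anything already selected.
            max_sim = max((similarity(item, s) for s in selected), default=0.0)
            return lam * relevance[item] - (1 - lam) * max_sim
        best = max(pool, key=mmr_score)
        selected.append(best)
        pool.remove(best)
    return selected

# Toy example: "a1" and "a2" are near-duplicate items; "b1" is dissimilar.
rel = {"a1": 0.9, "a2": 0.85, "b1": 0.6}
sim = lambda x, y: 0.95 if x[0] == y[0] else 0.1
print(mmr_rerank(["a1", "a2", "b1"], rel, sim, k=2))  # → ['a1', 'b1']
```

In the toy example, pure relevance ranking would return the two near-duplicates; the MMR penalty instead surfaces the dissimilar item, illustrating how diversification can limit the homogeneity that makes users vulnerable to one-sided content.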