Workshop on Online Misinformation- and Harm-Aware Recommender Systems

Co-located with RecSys 2021


2nd October 2021

Amsterdam, Netherlands

Topics of interest

The aim of this workshop is to bring together a community of researchers interested in tackling online harms and, at the same time, mitigating their impact on recommender systems. We seek novel research contributions on misinformation- and harm-aware recommender systems.

In this second edition, the workshop aims to further research in recommender systems that circumvent the negative effects of online harms by promoting the recommendation of safe content and users, with a special interest in work tackling the negative effects of recommending fake or harmful content linked to the COVID-19 crisis.


We solicit contributions on all topics related to misinformation- and harm-aware recommender systems, including (but not limited to) the following:


  • Reducing misinformation effects (e.g. echo chambers, filter bubbles).

  • Online harms dynamics and prevalence.

  • Computational models for multi-modal and multi-lingual harm detection and countermeasures.

  • User/content trustworthiness.

  • Bias detection and mitigation in data/algorithms.

  • Fairness, interpretability and transparency in recommendations.

  • Explainable models of recommendations.

  • Data collection and processing.

  • Design of specific evaluation metrics.

  • The appropriateness of countermeasures for tackling online harms in recommender systems.

  • Applications and case studies of misinformation- and harm-aware recommender systems.

  • Mitigation strategies against coronavirus-fueled hate speech and COVID-related misinformation propagation.

  • Ethical and social implications of monitoring, tackling and moderating online harms.

  • Online harm engagement, propagation and attacks in recommender systems.

  • Privacy-preserving recommender systems.

  • Attack prevention in collaborative filtering recommender systems.

  • Quantitative user studies exploring the effects of recommending harmful content.


We encourage work focused on mitigating online harms in domains beyond social media, such as collaborative filtering settings, e-commerce platforms, news media, video platforms (e.g. YouTube or Vimeo) or opinion-mining applications, among others. Works specifically analysing any of the previous topics in the context of the COVID-19 crisis are also welcome, as are works based on social networks other than Twitter and Facebook, such as TikTok, Reddit, Snapchat and Instagram.