Invited Speakers

What do we need to effectively measure computational harms?

Dr. Alexandra Olteanu is a computational social science and social computing researcher. Currently, she is a Principal Researcher in the Fairness, Accountability, Transparency and Ethics (FATE) Group at Microsoft Research. Prior to joining the FATE group, she was a Social Good Fellow at the IBM T.J. Watson Research Center, NY. She is interested in how data biases and methodological limitations delimit what we can learn from online social traces, and in how we can make the systems that leverage such data safer, fairer, and generally less biased. The problems she tackles are often motivated by existing societal challenges such as hate speech, racial discrimination, climate change, and disaster relief.

Detecting online harmful information: fake news, conspiracy theories, and misogyny

Dr. Paolo Rosso is a full professor at the Universitat Politècnica de València, Spain, where he is also a member of the PRHLT research center. His research interests focus mainly on author profiling, irony detection, opinion spam detection, and plagiarism detection. Since 2009, he has been involved in the organisation of PAN benchmark activities at the CLEF and FIRE evaluation forums, mainly on plagiarism/text reuse detection and author profiling. At SemEval, he has been a co-organiser of shared tasks on sentiment analysis of figurative language in Twitter (2015) and on multilingual detection of hate speech against immigrants and women in Twitter (2019).

The ease of generating content online and the anonymity that social media provide have increased the amount of harmful content that is published. Fake news, conspiracy theories, and offensive content are published and propagated on a daily basis. In this keynote, I will describe how fake news and conspiracy theories can be detected by going beyond textual information alone: emotions, psycholinguistic characteristics, and multimodal information play an important role and should be taken into account. At the end of my talk, I will also briefly mention the problem of misogyny and the Multimodal Automatic Misogyny Identification (MAMI) shared task that will be organised at SemEval 2022.