Universitat Politècnica de València, Spain
Google Scholar: https://scholar.google.es/citations?user=HFKXPH8AAAAJ&hl=en
Title: On the detection of fake news, conspiracy theories, and other harmful information
Abstract: The rise of social media has offered a fast and easy way for the propagation of fake news and conspiracy theories. Despite the research attention it has received, fake news detection remains an open problem, and users keep sharing texts that contain false statements. In this keynote I will describe how to go beyond textual information to detect fake news, taking into account also affective and visual information, since these provide important insights into how fake news spreaders aim to trigger certain emotions in readers. I will also describe how psycholinguistic patterns and users' personality traits may play an important role in discriminating fake news spreaders from fact checkers. Finally, I will comment on some studies on the propagation of conspiracy theories. The ongoing work on the detection of disinformation, from fake news to conspiracy theories, is carried out in the framework of IBERIFIER, the Iberian media research & fact-checking hub on disinformation funded by the European Digital Media Observatory (2020-EU-IA-0252), and the XAI-DisInfodemics project on eXplainable AI for disinformation and conspiracy detection during infodemics, funded by the Spanish Ministry of Science and Innovation (PLEC2021-007681). In the final part of the keynote I will also briefly address the other side of harmful information in social media, hate speech, with emphasis on the case of misogynous memes.
Bio-sketch: Paolo Rosso is Full Professor at the Universitat Politècnica de València, where he is also a member of the Pattern Recognition and Human Language Technology (PRHLT) research center. His research interests are focused on social media data analysis, mainly on fake news and hate speech detection, author profiling, and sarcasm detection. He has published 50+ articles in journals (34 Q1) and 400+ papers at conferences and workshops; he has an H-index of 68 (source: Google Scholar) and appears in the ranking of the top H-index scientists in Spain (http://www.guide2research.com/scientists/ES). He has been PI of several national and international research projects funded by the EC, the U.S. Army Research Office, the Qatar National Research Fund, and Vodafone Spain. Currently, he is the PI of the XAI-DisInfodemics research project on eXplainable AI for disinformation and conspiracy detection during infodemics (Spanish Ministry of Science and Innovation), a member of the EC IBERIFIER project on monitoring the threats of disinformation (European Digital Media Observatory), of the project on Resources and Applications for Detecting and Classifying Polarized Hate Speech in Arabic Social Media (Qatar National Research Fund), and of the recent FairTransNLP project on Fairness and Transparency for equitable NLP applications in social media (Spanish Ministry of Science and Innovation). He has supervised 26 PhD theses and is currently advising 8 PhD students. Paolo Rosso has given several keynotes (TSD-2020, CICLing-2019, etc.) and has helped organise 30+ shared tasks at the PAN Lab at the CLEF and FIRE evaluation forums, as well as at SemEval, IberLEF and Evalita, on topics such as author profiling (e.g. profiling bots, haters, and fake news spreaders), hate speech detection, irony detection, and misogyny, sexism and toxic language identification. He has served as senior chair or track chair at conferences such as SIGIR, ACL, COLING and EMNLP, just to name a few.
Since 2014 he has been Deputy Steering Committee Chair of the CLEF Association.
Information Technologies Institute, CERTH, Greece
Google Scholar: https://scholar.google.com/citations?user=Nr7smP8AAAAJ&hl=en
Title: Content, Context and Network-based Approaches for Fighting Disinformation
Abstract: Recent developments and events of worldwide significance, such as the COVID-19 pandemic and the war in Ukraine, have made clear that online disinformation is a long-lasting challenge of immense scale and complexity. Focusing on visual disinformation, which can appear in many forms, including manipulated photos/videos, deepfakes, visuals out of context and false connections, a variety of approaches and tools are needed in order to address this challenge. In this talk, I will present our lab's efforts in this area across three main directions: approaches that take into account content, context and network-based information. The talk will cover media forensics, deepfake detection, and reverse image and video search approaches, together with tools already used by journalists and fact-checkers. Key challenges and additional aspects, such as actual operational settings, human behaviour and policy issues, will also be covered.
Biosketch: Dr. Ioannis (Yiannis) Kompatsiaris is the Director of CERTH-ITI and the Head of the Multimedia Knowledge and Social Media Analytics Laboratory. His research interests include AI/ML for Multimedia, Semantics (multimedia ontologies and reasoning), Social Media and Big Data Analytics, Multimodal and Sensor Data Analysis, Human Computer Interfaces, e-Health, Cultural, Media/Journalism and Security applications. He is the co-author of 178 papers in refereed journals, 63 book chapters, 8 patents and 560 papers in international conferences. Since 2001, Dr. Kompatsiaris has participated in 88 National and European research programs, in 31 of which he has been the Project Coordinator. He has also been the PI in 15 contracts with industry. He has been the co-chair of various international conferences and workshops, including the 13th IEEE Image, Video, and Multidimensional Signal Processing (IVMSP 2018) Workshop, and has served as a regular reviewer, associate and guest editor for a number of journals and conferences; he is currently an associate editor of IEEE Transactions on Image Processing. He is a member of the National Ethics and Technoethics Committee, the Scientific Advisory Board of the CHIST-ERA funding programme, and an elected member of the IEEE Image, Video and Multidimensional Signal Processing Technical Committee (IVMSP-TC). He is a Senior Member of IEEE and ACM. In January 2014, he co-founded Infalia, a high-tech SME focusing on data-intensive web services and applications.
Bielefeld University, Germany
Google Scholar: https://scholar.google.es/citations?hl=en&user=1d3OxaUAAAAJ
Title: Trustworthy AI - Attacks, explanations, and lifelong learning
Abstract: The increasing availability of smart products and AI components in everyday life, such as speech assistants or image recognition tools, has also led to an increase in peculiar outputs and errors of AI models. Examples include adversarial attacks (i.e., misclassifications of AI models which are surprising for humans), biases of models (i.e., models which treat some subgroups differently from others), or functional failures (i.e., models no longer working as required in realistic scenarios).
After a glimpse at some spectacular AI failures, I will address three approaches that have been proposed in this context:
I) adversarial attacks: what makes an attack adversarial, and how can models be efficiently 'robustified'?
II) explanations of AI models: we focus on efficient and plausible counterfactual explanations and take a glimpse at how to evaluate their effectiveness;
III) lifelong learning: how can humans teach machines, and how can AI models be continuously adapted based on human feedback?
Biosketch: Barbara Hammer is a full Professor for Machine Learning at the CITEC Cluster at Bielefeld University, Germany. She received her Ph.D. in Computer Science in 1999 and her venia legendi (permission to teach) in 2003, both from the University of Osnabrueck, Germany, where she was head of an independent research group on the topic 'Learning with Neural Methods on Structured Data'. In 2004, she accepted an offer for a professorship at Clausthal University of Technology, Germany, before moving to Bielefeld in 2010. Barbara's research interests cover theory and algorithms in machine learning and neural networks and their application to technical systems and the life sciences, including explainability, learning with drift, nonlinear dimensionality reduction, recursive models, and learning with non-standard data. Barbara has chaired the IEEE CIS Technical Committee on Data Mining and Big Data Analytics, the IEEE CIS Technical Committee on Neural Networks, and the IEEE CIS Distinguished Lecturer Committee. She has been elected as a member of the IEEE CIS Administrative Committee and the INNS Board. She is an associate editor of the IEEE Computational Intelligence Magazine, IEEE TNNLS, and IEEE TPAMI. Currently, a large part of her work focuses on explainable machine learning for spatio-temporal data, in her role as a PI of the ERC Synergy Grant Water-Futures.