PANACEA
Challenges

The term ‘infodemic’, coined by the WHO, refers to misinformation during pandemics that can create panic, fragment the social response, affect rates of transmission, and encourage trade in untested treatments that put people’s lives in danger. The WHO and government agencies have to divert significant resources to combat infodemics, and their scale makes computational techniques for claim veracity assessment essential. However, existing approaches largely rely on supervised learning. Current accuracy levels fall short of what practical adoption requires: training data is scarce, and performance tends to degrade significantly on claims and topics unseen during training. Existing practices are therefore unsuitable for addressing the scale and complexity of the COVID-19 infodemic.

Project Aims

This project will research novel supervised and unsupervised methods for assessing the veracity of claims that are unverified at the time of posting, by integrating information from multiple sources and building a knowledge network that enables cross-verification. Key originating sources/agents will be identified through patterns of misinformation propagation, and results will be presented via a novel visualisation interface for easy interpretation by users.

This high-level aim gives rise to the following objectives:

RO1. Collect COVID-19 related data from social media platforms and authoritative resources.

RO2. Develop automated methods to extract key information on COVID-19 from scientific publications and other relevant sources.

RO3. Develop novel unsupervised/supervised approaches for veracity assessment by incorporating evidence from external sources (illustrated by the first sketch after this list).

RO4. Analyse the dynamic spreading patterns of rumours on social media and identify key sources/agents (illustrated by the second sketch after this list).
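
To make RO3 more concrete, the following is a minimal sketch of an evidence-grounded pipeline, assuming TF-IDF retrieval over a hypothetical pool of trusted documents. The function names, the verdict threshold and the example documents are illustrative assumptions; the project's actual supervised/unsupervised models would replace the placeholder verdict step.

    # Minimal evidence-retrieval sketch (Python, scikit-learn); illustrative only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def retrieve_evidence(claim, trusted_docs, top_k=3):
        # Rank candidate evidence passages by TF-IDF cosine similarity to the claim.
        vectorizer = TfidfVectorizer(stop_words="english")
        doc_matrix = vectorizer.fit_transform(trusted_docs)
        claim_vec = vectorizer.transform([claim])
        scores = cosine_similarity(claim_vec, doc_matrix).ravel()
        ranked = scores.argsort()[::-1][:top_k]
        return [(trusted_docs[i], float(scores[i])) for i in ranked]

    def assess_claim(claim, trusted_docs):
        # Placeholder verdict: a trained veracity/entailment model would go here.
        evidence = retrieve_evidence(claim, trusted_docs)
        best_score = evidence[0][1] if evidence else 0.0
        verdict = "evidence found, needs model verdict" if best_score > 0.2 else "no clear evidence"
        return verdict, evidence

    # Toy usage with made-up documents.
    docs = [
        "Drinking bleach does not cure or prevent COVID-19 and is dangerous.",
        "Vaccines approved by regulators underwent clinical trials for safety.",
        "Handwashing reduces the transmission of respiratory viruses.",
    ]
    print(assess_claim("Drinking bleach cures COVID-19", docs))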

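Similarly, the second sketch illustrates the kind of propagation analysis RO4 refers to, assuming a toy reshare cascade and standard graph centrality measures from networkx. The account names and the choice of out-degree and PageRank are illustrative assumptions rather than the project's final methodology.

    # Minimal rumour-propagation sketch (Python, networkx); illustrative only.
    import networkx as nx

    # Hypothetical reshare cascade: (original poster, resharer, timestamp).
    cascade = [
        ("acct_A", "acct_B", 1), ("acct_A", "acct_C", 2),
        ("acct_C", "acct_D", 3), ("acct_C", "acct_E", 4),
        ("acct_B", "acct_F", 5),
    ]

    G = nx.DiGraph()
    for src, dst, t in cascade:
        G.add_edge(src, dst, time=t)

    # Likely originating sources: accounts with no incoming reshare edges.
    origins = [n for n in G.nodes if G.in_degree(n) == 0]

    # Key agents: accounts reshared most often (out-degree), plus PageRank on the
    # reversed graph so that influence flows towards originators.
    out_deg = dict(G.out_degree())
    most_reshared = max(out_deg, key=out_deg.get)
    influence = nx.pagerank(G.reverse(), alpha=0.85)

    print("origins:", origins)
    print("most reshared account:", most_reshared)
    print("influence scores:", influence)
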
Applications

The proposed project will provide benefits for the understanding and management of information around COVID-19 by different stakeholders:

The public. The new veracity assessment model will help the public judge the reliability of online content. Unreliable information about treatments, for instance, will be linked to pertinent articles from reliable institutional and scientific sources.

Healthcare services. Real-time veracity assessment of claims about how to prevent, self-diagnose and treat coronavirus could help protect public health, which in turn would help keep healthcare services from being overwhelmed.

Government agencies. Misinformation may cause ethnic division, political tension, and unnecessary panic during pandemics. It is therefore important to have an automated tool that flags misinformation in real time and suggests mitigation strategies to combat it, such as targeted information provision and the competitive spreading of verified facts.

Mainstream media organisations. The proposed claim veracity assessment tool, together with the visualisation interface displaying the related evidence, will help mainstream media organisations identify fake news and correct misinformation in a timely manner, supporting high-quality journalism.

© 2023 Copyright: KCL/Warwick NLP Group and QMUL Cognitive Science Research Group