2021-10-08: Role of Cybersecurity on Curbing the Spread of Disinformation and Misinformation -- Trip Report to the CCI Workshop and More


Earlier in 2021, the Commonwealth Cyber Initiative (CCI) announced a call for proposals themed "Role of Cybersecurity on Curbing the Spread of Disinformation and Misinformation." Traditionally, cybersecurity has been about protecting computers and information systems from information disclosure and from damage to hardware, software, and electronic data. Recently, many researchers have realized that combating misinformation and disinformation (MIDIS) should be incorporated into that scope, because MIDIS is also a payload delivered by attackers, whether intentionally or unintentionally. Such information can cause subsequent damage to hardware and software, to people's health, and even to society.

Figure 1: LexisNexis appearances of “fake news” in newspaper coverage in the United States and globally. Courtesy: Scheufele & Krause (2018)

Several papers have revealed the ever-growing phenomenon of MIDIS and characterized its impact. One paper by Scheufele & Krause (2018) found that occurrences of "fake news" in newspaper coverage, both in the United States and globally, have increased dramatically since around 2002-2003, which coincides with the period when SARS spread. Since the advent of smartphones (the iPhone was introduced in 2007), fake news has spread even faster. Since then, there has been a great deal of research on this topic, spanning computer science, psychology, and sociology. The call aimed to fund joint efforts from multi-disciplinary teams in the CCI network to conduct research on how cybersecurity and artificial intelligence tools and concepts may help limit, deter, or stop the creation and spread of disinformation and misinformation.

This is a 1-year seed grant, which means the funding agency expects awardees to apply for larger grants from federal funding agencies (e.g., NSF). The competitors were primarily universities within Virginia. We were told that 14 proposals were submitted. Three proposals were awarded in the first announcement, and four were placed on a waiting list; these were eventually awarded when additional funds became available. Our proposal, titled "The Acceptance and Effectiveness of Explainable AI-Powered Credibility Labeler for Scientific Misinformation and Disinformation," was fortunately among them. The PI is Dr. Jian Wu, assistant professor of Computer Science. The Co-PIs are Dr. Jeremiah Still, associate professor of Psychology, and Dr. Jiang Li, professor of Electrical and Computer Engineering (ECE). Three students are involved:

  • Md Reshad Hoque – senior graduate student in ECE, responsible for algorithmic research and implementation
  • Morgan Edwards – master’s student in Psychology, responsible for user interface and experimental design and analysis
  • Winston Shields – master’s student in CS, responsible for web-based user interface implementation
At the CCI workshop at the University of Virginia, the PIs of the awarded proposals were invited to present their proposals and progress. Seven proposals were awarded; the six presented at the workshop are listed below (titles are based on onsite notes and may not be 100% accurate).
  1. Malicious intent recognition tools for social cybersecurity to counter disinformation narratives. The presenter was Dr. Hemant Purohit (GMU)
  2. Analysis of misinformation and disinformation efforts from mass media and social media in creating anti-US perceptions. The presenters were Dr. Hamdi Kavak (GMU) and Dr. Saltuk Karahan (ODU) 
  3. The Acceptance and Effectiveness of Explainable AI-Powered Credibility Labeler for Scientific Misinformation and Disinformation. The presenter was Dr. Jian Wu (ODU)
  4. Investigate question-under-discussion (QuD) framework to analyze social network communication. The presenters were Dr. Sachin Shetty (ODU), Dr. Teresa Kissel (ODU), and their students.
  5. Disinformation detection systems in autonomous vehicles. The presenter was Dr. Michael Gorman (UVA)
  6. Explore the impact of human-AI collaboration on open source intelligence. The presenter was Dr. Kurt Luther (VT)
The PI/Co-PI of one proposal could not participate in the workshop due to family issues. 

I presented our proposal and current status. Below is a synopsis of our proposal. 

Fake scientific news has been spreading across the internet for years, especially since COVID-19 began; one example is the claim that “mosquitoes spread coronavirus.” It is easy for people to believe such news because it seems intuitively correct. Will they change their minds if they see evidence against these news articles from the scientific literature? Do they trust credibility scores estimated by artificial intelligence algorithms? Will they still pass these news articles to their friends on Facebook? In this project, titled “The Acceptance and Effectiveness of an AI-Powered Credibility Labeler for Scientific Misinformation and Disinformation”, computer scientists will collaborate with psychologists at Old Dominion University to investigate these questions. The project consists of three steps. First, a new algorithm will be researched and implemented to estimate how likely it is that a scientific news article reveals the truth; the algorithm will also provide evidence from scientific publications. Then, a website will be built to show users the results obtained in the first step. Finally, a study will be performed with news consumers to observe their reactions when they are shown fake news articles and debunking information on the website we built. We hope our research can reveal the effectiveness of using the scientific literature as a weapon to debunk scientific misinformation and disinformation and eventually curb its spread across society.
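To make the first step a bit more concrete, here is a minimal, purely illustrative sketch of evidence-based credibility scoring. The project's actual models and scoring scheme are not described in this post, so the Evidence structure, the stance scale, the aggregation rule, and the example data below are all hypothetical placeholders.

```python
# Illustrative sketch only: all names, scales, and data below are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class Evidence:
    paper_title: str   # scientific paper retrieved as evidence
    snippet: str       # passage that addresses the claim
    stance: float      # hypothetical scale: +1 supports the claim, -1 refutes it

def credibility_score(evidence: List[Evidence]) -> float:
    """Aggregate per-paper stances into a 0-1 credibility estimate (illustrative)."""
    if not evidence:
        return 0.5                       # no evidence either way
    mean_stance = sum(e.stance for e in evidence) / len(evidence)
    return (mean_stance + 1.0) / 2.0     # map [-1, 1] onto [0, 1]

# Toy example for the claim "mosquitoes spread coronavirus"
evidence = [
    Evidence("Hypothetical vector-competence study",
             "No viral replication was observed in mosquito tissue.", -1.0),
    Evidence("Hypothetical transmission-route review",
             "Respiratory droplets remain the only documented route.", -0.8),
]
print(credibility_score(evidence))       # ≈ 0.05, i.e., likely misinformation
```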

A paper demonstrating the improved performance of our algorithm was recently submitted. Our IRB protocol has also been approved by the university. Our next milestone is to collect fake scientific news articles and scientific papers. We are also working with our psychology collaborators to improve the UI design.

I received two good questions from the audience. The first was how we keep the digital library repository up to date. The second was how the system deals with controversial scientific news.

For the first question, our system relies on online digital libraries such as arXiv and Semantic Scholar to actively update their indexes, so new papers will show up in the search results. For the second question, the controversial nature of a claim will be reflected in the confidence score, which ranges from 0 to 1. Empirically, scores between 0.3 and 0.7 can be regarded as indicating controversial news.
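As a rough illustration of the second answer, the sketch below maps the 0-1 confidence score to a coarse user-facing label using the 0.3 and 0.7 cutoffs mentioned above; the label wording itself is hypothetical and not taken from our actual interface.

```python
def credibility_label(score: float) -> str:
    """Map a 0-1 confidence score to a coarse label (label names are hypothetical)."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must lie in [0, 1]")
    if score < 0.3:
        return "likely misinformation"
    if score <= 0.7:
        return "controversial / mixed evidence"
    return "supported by the literature"

for s in (0.1, 0.5, 0.9):
    print(f"{s:.1f} -> {credibility_label(s)}")
```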

The CCI workshop was hosted at the Link Lab at UVA. Jennifer West, Dean of Engineering and Applied Science at UVA; John Stankovic, Director of the Link Lab; and Luiz DaSilva, Executive Director of CCI, gave welcome speeches before the sessions. At the end of the workshop, Luiz hosted a short session to discuss possible future activities, including setting up a dataset repository and publishing a book.

-- Jian Wu




