2021-10-08: Role of Cybersecurity on Curbing the Spread of Disinformation and Misinformation -- Trip Report to the CCI Workshop and More
Earlier in 2021, the Commonwealth Cyber Initiative (CCI) announced a call for proposals themed "Role of Cybersecurity on Curbing the Spread of Disinformation and Misinformation." Traditionally, cybersecurity was about protecting computers and information systems from information disclosure and from damage to hardware, software, and electronic data. Recently, many researchers have realized that combating misinformation and disinformation (MIDIS) should be incorporated into this scope, because MIDIS, whether spread intentionally or unintentionally, is also a delivery vehicle for malicious attackers. Such information can cause subsequent damage to hardware and software, to people's health, and even to society.
Several papers have revealed the ever-growing phenomenon of MIDIS and characterized its impact. One paper by Scheufele & Krause (2018) found that occurrences of "fake news" in newspaper coverage, in the United States and globally, increased dramatically starting around 2002-2003, coinciding with the period when SARS spread. Since the advent of mobile devices (the iPhone was introduced in 2007), fake news has spread even faster. Since then, a large body of research has emerged on this topic, spanning computer science, psychology, and sociology. The call aimed to fund joint efforts by multi-disciplinary teams in the CCI network to research how cybersecurity and artificial intelligence tools and concepts may help limit, deter, or stop the creation and spread of disinformation and misinformation.
This is a 1-year seed grant, which means the funding agency expects awardees to apply for larger grants from federal funding agencies (e.g., NSF). The competitors were mostly universities within Virginia. We were told that 14 proposals were submitted. Three proposals were awarded in the first announcement, and four more were placed on the waiting list and later awarded when additional funds became available. Our proposal, titled "The Acceptance and Effectiveness of Explainable AI-Powered Credibility Labeler for Scientific Misinformation and Disinformation," was fortunately among the awarded. The PI is Dr. Jian Wu, assistant professor of Computer Science. The Co-PIs are Dr. Jeremiah Still, associate professor of Psychology, and Dr. Jiang Li, professor of Electrical and Computer Engineering (ECE). Three students are involved:
- Md Reshad Hoque – senior graduate student of ECE, responsible for algorithmic research and implementation
- Morgan Edwards – master’s student in Psychology, responsible for user interface and experimental design and analysis
- Winston Shields – master’s student in CS, responsible for web-based user interface implementation
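To make the idea of an explainable credibility labeler a bit more concrete, here is a minimal sketch in Python. It is not the project's actual method: it assumes a simple TF-IDF plus logistic regression pipeline (scikit-learn) and uses the model's term weights as a crude explanation; all example texts, labels, and the `label_with_explanation` helper are hypothetical placeholders for illustration only.

```python
# Minimal sketch of a credibility labeler that also reports which terms
# drove its decision. Placeholder data; not the funded project's model.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data: 1 = credible, 0 = not credible (illustrative only).
texts = [
    "Peer-reviewed study reports vaccine efficacy of 94% in phase 3 trials.",
    "Clinical data published in a medical journal support the treatment.",
    "Miracle cure doctors don't want you to know about, no evidence needed!",
    "Secret lab leak covered up, share before this post gets deleted!",
]
labels = [1, 1, 0, 0]

vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
X = vectorizer.fit_transform(texts)

clf = LogisticRegression()
clf.fit(X, labels)

def label_with_explanation(text, top_k=3):
    """Return P(credible) plus the top-weighted terms behind the score."""
    x = vectorizer.transform([text])
    prob_credible = clf.predict_proba(x)[0, 1]
    # Contribution of each present term = tf-idf value * learned weight.
    contributions = x.toarray()[0] * clf.coef_[0]
    top_idx = np.argsort(np.abs(contributions))[::-1][:top_k]
    terms = vectorizer.get_feature_names_out()
    explanation = [(terms[i], round(float(contributions[i]), 3))
                   for i in top_idx if contributions[i] != 0]
    return prob_credible, explanation

print(label_with_explanation("Peer-reviewed trials show the vaccine works."))
```

In a real system the bag-of-words classifier would be replaced by a stronger model and a principled explanation method, but the sketch shows the shape of the output a credibility labeler's user interface would need to display: a label plus human-readable evidence.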
At the CCI workshop, the awarded teams presented their projects:
- Malicious intent recognition tools for social cybersecurity to counter disinformation narratives. The presenter was Dr. Hemant Purohit (GMU)
- Analysis of misinformation and disinformation efforts from mass media and social media in creating anti-US perceptions. The presenters were Dr. Hamdi Kavak (GMU) and Dr. Saltuk Karahan (ODU)
- The Acceptance and Effectiveness of Explainable AI-Powered Credibility Labeler for Scientific Misinformation and Disinformation. The presenter was Dr. Jian Wu (ODU)
- Investigating a question-under-discussion (QuD) framework to analyze social network communication. The presenters were Dr. Sachin Shetty (ODU), Dr. Teresa Kissel (ODU), and their students.
- Disinformation detection systems in autonomous vehicles. The presenter was Dr. Michael Gorman (UVA)
- Exploring the impact of human-AI collaboration on open source intelligence. The presenter was Dr. Kurt Luther (VT)