2024-09-04: Trust and Influence Program Review Meeting 2024 Trip Report


Dr. Erika Frydenlund, Melissa Miller-Felton, and I attended the Air Force Office of Scientific Research (@AFOSR) 2024 Trust and Influence Program Review on August 12-16, 2024 in Dayton, Ohio. We were invited because of our Minerva Research Initiative Grant, “What's Missing? Innovating Interdisciplinary Methods for Hard-to-Reach Environments,” awarded by the U.S. Department of Defense in 2022 (described in a previous blog post by Dr. Erika Frydenlund when the grant was first awarded). Although the research conducted is public, this meeting was not recorded.

On the first day, the afternoon keynote was given by Dr. Samuel Segun, a senior researcher at the Global Center on AI Governance. He spoke on the topic “Building responsible & trustworthy AI: Operationalizing Responsible Artificial Intelligence (RAI) Practice,” underscoring the critical need for trust in AI systems. Dr. Segun motivated the importance of trust in AI systems by asking “Who is responsible?” when an aircraft accident occurs, citing the two Boeing 737 MAX crashes (Lion Air Flight 610 in 2018 and Ethiopian Airlines Flight 302 in 2019).


He discussed AI trust, ethics, challenges, technical foundations, regulations, and future directions, centering his talk around the “Global Index on Responsible AI” project, which highlights how the global advancement of responsible AI is lagging far behind the rapid pace of AI development. Dr. Segun also highlighted real-world scenarios that complicate our understanding of responsibility and ethics in AI. He referenced griefbots, which simulate conversations with the deceased, and self-driving cars, where determining accountability in the event of an accident remains unclear. Additionally, he discussed a maternity support chatbot that was developed to improve maternal well-being among pregnant women but did not achieve the expected outcomes.


Conversations with individuals well-versed in pregnancy and women's health helped me realize why a chatbot for pregnant women might not be the most effective solution: every case is unique, and expectant mothers often need advice specific to their cultural, personal, or medical context. Still, we will likely see more of these chatbots in the future, as the trend toward using chatbots for a wide range of applications continues to grow. During the Q&A session, an audience member raised concerns about ethics specific to AI model design. Dr. Segun responded that while general ethics are relatively easy to address, the ethical considerations specific to the AI system itself are complex and demand further scrutiny, a point that left the audience seeking more detailed discussion.



Dr. Frydenlund, Melissa, and I presented (in that order) the progress of our project before lunch the following day. We are part of a large international group, with each team contributing unique methodologies to address a common goal: understanding how residents of informal settlements innovate to adapt to varying levels of safety and security. The project concentrates on two field sites: Khayelitsha Township in South Africa and Villa Caracas in Colombia. Dr. Frydenlund began by introducing the project and explaining how the teams are integrated.



Our collaborators include:

1. Old Dominion University, USA (our institution)
   * Visual Sociology Team: Jennifer Fish
     * GRA: Melissa Miller-Felton
   * Citizen Science Team: Erika Frydenlund, in collaboration with International Consultants
     * GRA: Guljannat Huseynli (Past Contributor)
   * Web and Social Media Team: Michele C. Weigle and Michael L. Nelson (Web Science & Digital Libraries Research Group, @WebSciDL)
     * GRAs: Himarsha R. Jayanetti, Kritika Garg (Past Contributor), Caleb Bradford (Past Contributor)
   * Referencer Software Team: Jose Padilla
     * GRAs: Jhon Botello, Joseph Martinez
     * Software Developer: Anthony Barraco
     * The Referencer is a collaborative platform created by the Storymodelers team, based on their experiences with past interdisciplinary projects. It serves as the main data repository for our project, is freely available, and is actively being refined in its beta version.
2. Universidad del Norte, Colombia
   * Survey Team: Katherine Palacio
     * GRAs: Daniel Bolivar, Leidy Gonzalez, Valeria Silgado, Liss Romero
3. University of Agder, Norway
   * Institutional Ethnography Team: Hanne Haaland and Hege Wallevik
     * GRA: Jade Lee MacDonnell (University of Cape Town)
4. York University, Canada
   * “Science of Teams” Team: Michaela Hynie
     * GRA: Michael Ruderman

Melissa then presented findings from the “Visual Sociology” perspective, sharing insights the team gained through photographs, interviews, and focus groups, with a particular focus on how safety, security, and infrastructure interact. She also addressed the challenges the team faced, such as the difficulty of conducting field visits due to safety concerns.


I concluded by presenting our team's progress so far, focusing on methods for investigating the social media and news footprint of residents of slums and informal settlements. Despite challenges such as limited access to social media APIs and difficulties with web crawling, we are making significant progress. We have also been exploring methods to identify first-hand social media accounts using location tags and user account metadata, as sketched below.
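
To give a flavor of what that account identification can look like, here is a minimal, hypothetical Python sketch. It is not our actual pipeline: the `Post` fields (`place`, `profile_location`) and the site keyword lists are assumptions standing in for real platform metadata, and a real implementation would also weigh posting history, language, and other account signals.

```python
from dataclasses import dataclass
from typing import List, Optional, Set

# Illustrative keyword lists for each field site; a real list would be curated
# with local collaborators and include neighborhood names in local languages.
SITE_KEYWORDS = {
    "khayelitsha": ["khayelitsha", "cape town"],
    "villa_caracas": ["villa caracas", "barranquilla"],
}

@dataclass
class Post:
    author: str
    text: str
    place: Optional[str]             # location tag attached to the post, if any
    profile_location: Optional[str]  # free-text location from the author's profile

def mentions_site(value: Optional[str], keywords: List[str]) -> bool:
    """Case-insensitive substring match against the site's keyword list."""
    return bool(value) and any(k in value.lower() for k in keywords)

def candidate_firsthand_accounts(posts: List[Post], site: str) -> Set[str]:
    """Flag authors whose location tag or profile location mentions the field site."""
    keywords = SITE_KEYWORDS[site]
    return {
        p.author
        for p in posts
        if mentions_site(p.place, keywords) or mentions_site(p.profile_location, keywords)
    }

if __name__ == "__main__":
    sample = [
        Post("resident_a", "Flooding near the taxi rank again", "Khayelitsha, Cape Town", None),
        Post("news_account", "Weekly headline roundup", None, "London"),
    ]
    print(candidate_firsthand_accounts(sample, "khayelitsha"))  # {'resident_a'}
```

Accounts flagged this way would still need manual review, since location tags are sparse and profile locations are self-reported; the sketch only illustrates a first filtering step.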



A few presentations that particularly caught my attention were on Human-Robot Interaction and building trust in human-machine teams. For instance, one study involved a robot making a sandwich while a human supervised, examining how the human’s facial expressions and eye movements changed in response. The audience was curious whether such a study could, or would, be performed with human-human interaction as a baseline.


Another group of researchers is working on developing embedded and physiological metrics to understand and predict trust dynamics in complex human-autonomy teams, especially for Air Force and Space Force missions. The audience discussion highlighted future research opportunities, specifically how AI interacts with multiple teammates and manages complex interactions involving both multiple AIs and multiple humans. 


A group of researchers from the University of Iowa is studying vulnerabilities of users and platforms to strategic information operations. Their goal is to develop a taxonomy of user and platform vulnerabilities to foreign influence campaigns and propose practical strategies for mitigating these issues. One interesting aspect of their work is examining whether the meaning of "like" varies across different regions (like the US, Europe, and East Africa) and platforms (such as Reddit, X, and YouTube). This project aligns particularly well with my research interests and stood out to me among the presentations at the meeting. In a later discussion with PI Brian Ekdale, I learned that their project not only includes an audit of platform algorithms but also features a phase dedicated to studying social media users. That phase aims to understand common engagement behaviors, explore the relationships between psychological or cultural variables and engagement patterns through surveys, and conduct interviews to identify which mitigation strategies users are likely to adopt.


Overall, the meeting showcased a broad spectrum of research on trust and ethics, encompassing AI, machine learning, robotics, military technology, biometric systems, algorithmic fairness, and cybersecurity, all examined through a social science lens with an emphasis on impact. Unlike technical venues, the focus of the presentations was less on the nitty-gritty of the technology and more on insights and broader implications. Essentially, the technical tasks were treated as the foundation, while the discussions elevated the research to a conceptual, impact-driven level.


Reflections


Being a computer science student stepping into the world of social science has been both exciting and challenging. This project has truly pushed me to adapt my skills to innovative research methods, revealing conditions in hard-to-reach environments through their digital footprints. Attending and presenting at the 2024 Trust and Influence Program Review Meeting was a remarkable experience that underscored the importance of collaboration and continuous learning. Sharing our work and gaining new insights from diverse sessions has strengthened my commitment to this interdisciplinary collaboration, and I’m eager to see where it leads next.


Acknowledgements

I would like to acknowledge Dr. Erika Frydenlund and Melissa Miller-Felton from the Storymodelers lab, as well as Dr. Michele C. Weigle and Dr. Michael L. Nelson from the Web Science & Digital Libraries Research Group, for their valuable contributions and feedback on this blog post.


My travel experience

As a fun detour, I managed to squeeze in some time to explore Ohio. I had the opportunity to visit the National Museum of the US Air Force (the oldest and largest military aviation museum in the world) and Carillon Historical Park in Dayton, Ohio. At the Air Force Museum, I explored an impressive collection of aircraft, experienced a VR visit to the moon, and enjoyed the F-22 simulator, which was my favorite. At Carillon Historical Park, I got to see the 1905 Wright Flyer III, considered the first practical airplane and pivotal in the history of aviation.


~ Himarsha R. Jayanetti (@HimarshaJ)

