2024-01-23: International Conference on Intelligent User Interfaces (IUI) 2023 - Sydney, New South Wales (NSW), Australia Trip Report


The 28th annual Intelligent User Interfaces (IUI) conference, held at the University of Technology Sydney in Sydney, Australia, is one of the major conferences in Human-Computer Interaction (HCI). I was fortunate that our paper, "AutoDesc: Facilitating Convenient Perusal of Web Data Items for Blind Users," was accepted at this venue. Presenting our work on March 28, 2023 gave me the opportunity to interact with some of the field's best-known authors and experts.

Workshops:

SOCIALIZE: Social and Cultural Integration with Personalized Interfaces
Even though there were several workshops to choose from, I attended the SOCIALIZE workshop because it focused on interactive techniques for promoting the social and cultural inclusion of diverse user groups. The workshop solicits research that considers the nuances of interaction across different realities, with particular attention to disadvantaged groups such as refugees, migrants, children, the elderly, autistic individuals, and people with disabilities. It also explores human-robot interaction techniques, concentrating on the development of social robots that engage in socially effective behaviors and follow social rules, enhancing their collaborative role with people.

Some of the interesting papers are listed below:

Patricia K. Kahr from Eindhoven University of Technology presented research on how trust in AI develops over time when people receive AI advice in legal decision-making tasks. In a between-subjects experiment, they tested the impact of model accuracy (high vs. low) and explanation type (human-like vs. not) on trust. Results revealed higher trust in the high-accuracy model, with subjective trust even increasing over time. Human-like explanations did not significantly impact trust overall but boosted trust in the high-accuracy model.

Anand Aiyer from Stony Brook University introduced TASER, a browser extension designed to make accessibility forums easier to navigate with a screen reader. TASER uses a disentanglement algorithm to identify and separate the sub-conversations within a forum thread, and presents them through a customized interface for efficient screen-reader interaction. In a study with 11 screen-reader users, TASER significantly reduced input actions, interaction times, and cognitive load compared to conventional forum navigation, improving information foraging on accessibility forums.
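
The paper's disentanglement algorithm itself is more sophisticated than I can reproduce here, but a minimal sketch of the general idea, grouping a thread's posts into sub-conversations by following reply links, could look like the following. The Post structure, field names, and example thread are my own illustrative assumptions, not TASER's implementation.

from dataclasses import dataclass
from collections import defaultdict
from typing import Optional

@dataclass
class Post:
    post_id: str
    reply_to: Optional[str]   # id of the post being replied to, if any
    text: str

def disentangle(posts):
    """Group posts into sub-conversations rooted at their earliest ancestor."""
    by_id = {p.post_id: p for p in posts}
    groups = defaultdict(list)
    for p in posts:
        root = p
        while root.reply_to and root.reply_to in by_id:
            root = by_id[root.reply_to]
        groups[root.post_id].append(p)
    return list(groups.values())

thread = [
    Post("1", None, "Does NVDA announce ARIA live regions here?"),
    Post("2", "1", "Yes, but only after the latest update."),
    Post("3", None, "Separate question: is there a JAWS shortcut for this?"),
    Post("4", "3", "Try INSERT+F7 to list the links."),
]
for group in disentangle(thread):
    print([p.post_id for p in group])   # prints ['1', '2'] then ['3', '4']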

Yi He from IMSUT, The University of Tokyo, presented an interactive human-in-the-loop system for guiding the attention of deep neural networks in image classification tasks. Their approach aims to mitigate dataset bias by enabling users to direct classifier attention to specific regions in images, enhancing model interpretability and transferability. Unlike prior methods requiring pixel-level annotations, their system allows simple user clicks for image annotation and employs an active learning strategy to reduce annotations significantly. Through numerical evaluations and a user study, the system demonstrated efficiency and reliability, outperforming traditional non-active-learning approaches while saving labor and costs in fine-tuning biased datasets for DNNs.
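
Their system is more involved than I can capture here, but the two ingredients described above can be sketched roughly as follows: a loss term that penalizes attention falling outside the user-clicked region, and uncertainty sampling to decide which images to ask the user about next. The function names and toy data are my own assumptions, not the authors' code.

import numpy as np

def attention_guidance_loss(attn_map, click_mask, cls_loss, lam=1.0):
    """Classification loss plus a penalty on attention outside the user-clicked region."""
    outside = attn_map * (1 - click_mask)        # attention mass spent outside the clicked region
    return cls_loss + lam * outside.sum()

def select_for_annotation(probs, k=5):
    """Uncertainty sampling: pick the k images the classifier is least sure about."""
    entropy = -(probs * np.log(probs + 1e-9)).sum(axis=1)
    return np.argsort(-entropy)[:k]

attn = np.full((4, 4), 1 / 16)                   # uniform attention over a 4x4 map
mask = np.zeros((4, 4)); mask[1:3, 1:3] = 1      # the user clicked the centre region
print(attention_guidance_loss(attn, mask, cls_loss=0.3))   # 0.3 plus a 0.75 penalty

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(10), size=100)     # predicted class probabilities for 100 images
print(select_for_annotation(probs))              # indices of images to ask the user about next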

Changkun Ou from LMU Media Informatics Group presented a study on human-in-the-loop optimization, examining how different levels of expertise impact the quality of outcomes and user satisfaction in text, photo, and 3D mesh optimization tasks. They found that while novices achieved expert-level performance, experts tended to pursue more diverse outcomes, prolonging optimization iterations but leading to lower subjective satisfaction. Novices were more easily satisfied and terminated optimization faster. These findings highlight the need for designers to consider user expertise when developing human-in-the-loop systems and suggest leveraging expert behavior as an indicator for system improvement.
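
For readers unfamiliar with the setup, a human-in-the-loop optimization loop boils down to something like the sketch below: an automated proposer suggests candidates, a person rates them, and the person's satisfaction decides when to stop. This is only my minimal illustration of the paradigm, not the study's apparatus, and the simulated rating function stands in for a real human judgment.

import random

def propose(best, step=0.2):
    """Propose a new candidate near the best-rated one so far (simple local search)."""
    return best + random.uniform(-step, step)

def optimize(rate, budget=30, target=0.9):
    best_x, best_score = 0.0, rate(0.0)
    for _ in range(budget):
        if best_score >= target:      # the person is satisfied, so iteration stops
            break
        x = propose(best_x)
        score = rate(x)               # in a real system this rating comes from the person
        if score > best_score:
            best_x, best_score = x, score
    return best_x, best_score

# A simulated "user rating" that peaks at x = 1.0 stands in for human judgment here.
print(optimize(lambda x: max(0.0, 1 - abs(1.0 - x))))
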
Diane H. Dallal from Penn Computer and Information Science presented a study on enhancing fairness in adaptive social exergames using Shapley Bandits. Addressing algorithmic fairness in AI-driven resource allocation, the research focuses on the social exergame "Step Heroes," aiming to promote fairness among user groups pursuing a shared goal. They identify drawbacks in traditional multi-armed bandits (MABs) and introduce the Greedy Bandit Problem, proposing Shapley Bandits as a fairness-aware solution. By prioritizing overall player participation and intervention adherence over favoring high-performing individuals, Shapley Bandits effectively mediated the Greedy Bandit Problem, as evidenced by improved user retention and motivation in a study involving 46 participants.
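
I won't attempt to reproduce the Shapley Bandits algorithm itself, but the Shapley value at its core, each player's average marginal contribution to the shared goal, can be estimated with a short Monte Carlo sketch like the one below. The step counts, goal, and coalition value function are hypothetical, and the bandit layer that would use these values for allocation is omitted.

import random

steps = {"A": 9000, "B": 4000, "C": 2500}      # hypothetical daily step counts per player
GOAL = 12000                                    # shared team step goal

def coalition_value(players):
    """Value of a coalition: how much of the shared goal its combined steps cover."""
    return min(sum(steps[p] for p in players), GOAL)

def shapley_values(players, samples=5000):
    """Monte Carlo estimate of each player's average marginal contribution."""
    values = {p: 0.0 for p in players}
    for _ in range(samples):
        order = random.sample(players, len(players))   # a random arrival order
        so_far = []
        for p in order:
            values[p] += coalition_value(so_far + [p]) - coalition_value(so_far)
            so_far.append(p)
    return {p: v / samples for p, v in values.items()}

print(shapley_values(list(steps)))   # contribution of each player to the shared goal
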
K. J. Kevin Feng from the University of Washington presented a study addressing challenges faced by UX practitioners in designing interfaces for machine learning (ML) applications. The research involved a task-based design study with 27 UX practitioners, exploring the application of interactive ML paradigms in their workflows. By allowing direct experimentation with ML models, participants were able to better align ML capabilities with user goals, design more user-friendly ML interactions, and identify ethical considerations. The study highlights the potential of interactive ML in enhancing UX design but also acknowledges limitations, proposing research-informed machine teaching as a complement to future design tools in this domain.

Zelun Tony Zhang from fortiss, the Research Institute of the Free State of Bavaria for Software-Intensive Systems, presented a study focusing on intelligent decision support tools (DSTs) for aviation diversions. By interviewing professional pilots and using low-fidelity prototypes, the research aimed to understand how DSTs could benefit users who already have intricate knowledge of diversions. Results showed that while pilots would not blindly trust DSTs, they also rejected deliberate trust calibration during decision-making. The study revisits the concept of appropriation to explain this apparent contradiction and suggests means to enable appropriation, emphasizing transparency, detectability, and continuous support throughout the decision process. The research advocates expanding DST design beyond trust calibration at the moment of decision-making.

Patrick Hemmer from the Karlsruhe Institute of Technology (KIT) presented a study on appropriate reliance on AI advice, which is crucial in decision-making contexts like investments and medical treatments. Addressing the lack of a common definition and measurement concept, they proposed the "Appropriateness of Reliance" (AoR), a quantifiable two-dimensional measurement. Their research model examined the impact of providing explanations for AI advice in an experiment involving 200 participants. The study demonstrated how these explanations influence AoR and the effectiveness of AI advice, offering fundamental insights for analyzing reliance behavior and designing purposeful AI advisors.
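
As I understood the talk, the two dimensions of AoR capture, roughly, how often people switch to the AI when the AI is right and they were wrong, and how often they stick with their own answer when they were right and the AI was wrong. A rough sketch of that bookkeeping, assuming a simple log format of my own invention, could look like this:

def appropriateness_of_reliance(records):
    """records: dicts with keys initial, advice, final, truth (my assumed log format)."""
    switch_ok = kept_ok = switch_total = kept_total = 0
    for r in records:
        if r["initial"] != r["truth"] and r["advice"] == r["truth"]:
            switch_total += 1
            switch_ok += r["final"] == r["advice"]       # beneficial switch to the AI
        elif r["initial"] == r["truth"] and r["advice"] != r["truth"]:
            kept_total += 1
            kept_ok += r["final"] == r["initial"]        # justified self-reliance
    rair = switch_ok / switch_total if switch_total else None   # relative AI reliance
    rsr = kept_ok / kept_total if kept_total else None          # relative self-reliance
    return rair, rsr

log = [
    {"initial": "deny", "advice": "approve", "final": "approve", "truth": "approve"},
    {"initial": "approve", "advice": "deny", "final": "approve", "truth": "approve"},
]
print(appropriateness_of_reliance(log))   # (1.0, 1.0)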

Sumit Srivastava from the University of Twente presented a study on how lexical alignment affects human understanding of explanations given by conversational agents in Explainable Artificial Intelligence (XAI). The research explored natural language personalization through online interactions with conversational agents, focusing on recall and comprehension of explanations. Participants who engaged with an aligning agent showed significantly better information recall and understanding than those in the non-aligning agent or no-dialogue conditions. The results indicate that lexical alignment positively influences human understanding of explanations delivered by conversational agents.
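
Lexical alignment here means the agent re-uses the user's own word choices instead of its default vocabulary. A toy illustration of the idea (the synonym table and wording are mine, not the study's system) could be as simple as:

# A hypothetical synonym table; a real system would learn the user's vocabulary from dialogue.
SYNONYMS = {"physician": {"doctor", "physician"}, "medication": {"medicine", "medication", "drug"}}

def align(response_words, user_utterance):
    """Swap the agent's default words for synonyms the user has already used."""
    user_words = set(user_utterance.lower().split())
    aligned = []
    for w in response_words:
        options = SYNONYMS.get(w, {w})
        aligned.append(next((o for o in options if o in user_words), w))
    return " ".join(aligned)

print(align(["your", "physician", "adjusted", "the", "medication"],
            "My doctor changed my medicine last week"))
# -> "your doctor adjusted the medicine"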

Stephanie Houde from IBM Research presented "The Programmer's Assistant: Conversational Interaction with a Large Language Model for Software Development," exploring conversational interactions with Large Language Models (LLMs) in software development. Their prototype system leverages contextual information from the user's previous interactions and the surrounding code to improve the LLM's responses. In an evaluation involving 42 participants with varying programming expertise, the system supported extended discussions and surfaced knowledge beyond code generation. Despite initial skepticism, participants were impressed by the assistant's breadth of capabilities, response quality, and potential to boost productivity. This work underscores the unique potential of conversational interactions with LLMs for collaborative software development.
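
The details of the prototype are IBM's, but the context-assembly idea described above can be sketched in a few lines: each new question is sent together with the current code and the recent conversation turns, so the model can answer with both in view. Everything below, including the placeholder llm_complete function, is my own illustrative assumption rather than the actual system.

def build_prompt(code_context, history, question, max_turns=6):
    """Combine the current code, recent conversation turns, and the new question."""
    turns = history[-max_turns:]                  # keep only the most recent turns
    convo = "\n".join(f"{role}: {text}" for role, text in turns)
    return (
        "You are a programming assistant.\n"
        f"Current editor contents:\n{code_context}\n\n"
        f"Conversation so far:\n{convo}\n"
        f"User: {question}\nAssistant:"
    )

def llm_complete(prompt):                         # placeholder, not a real model API
    return "(model response would appear here)"

history = [("User", "What does this function do?"),
           ("Assistant", "It parses a CSV row into a dict.")]
code = "def parse_row(line):\n    return dict(zip(HEADER, line.split(',')))"
print(llm_complete(build_prompt(code, history, "Can you add error handling?")))
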
In conclusion, attending my first in-person conference was an unforgettable experience that allowed me to combine academic knowledge with practical insights from real-world experts. Conversations, both academic and casual, helped me gain a deeper understanding of the many interconnections between AI, user experience, and software development. Between sessions, exploring Sydney's vibrant city was a fun break and created memories that will last well beyond the conference. Not only did this conference open up career opportunities, but it also made me excited to explore new and exciting areas and to make a difference in them in the future. IUI 2024 will be held in Greenville, South Carolina, USA.

- Mohan Krishna Sunkara (@mk344567)
