2020-06-05: Augmented Human Online Trip Report


The 11th Augmented Human International Conference was held online on May 27th and 28th. The Augmented Human conference series focuses on scientific contributions toward technologies for well-being and experience that augment human capabilities, and has served as a forum to present and exchange such ideas for 10 years. The conference included keynote speeches, demo presentations, poster presentations, and research presentations, organized under four main research tracks: Neurosciences, Biomechanics, Technology for Healthcare, and Smartphones and Applications.

11th Augmented Human International Conference


Day 1 (May 27)

Day 1 of the conference started with a keynote by Professor Rory Cooper from the Human Engineering Research Laboratories, University of Pittsburgh. The topic of the keynote was “Advancing technologies for people with disabilities”, where he presented a smart wheelchair design that integrates different requirements of the user. These included mobility, intelligent bed technology for correct seating and weight distribution, and arm control that allows the user to operate a robotic arm integrated into the wheelchair.



The first paper session was on neurosciences, where the presentations mainly centered on the neurophysiological behavior of humans: the research either focused on extracting neurophysiological signals from the body or on providing input to the nervous system through external mechanisms. Bradley Rey from the HCI Lab of the University of Manitoba presented “Eyes-Free Graph Legibility”, which focused on providing tactile visualizations of graphs. The research used a skin-dragging technique through which they were able to provide the longer-lasting tactile perceptions required for visualization.



During the first session, we presented our paper “Gaze-Net: Appearance-Based Gaze Estimation using Capsule Networks”. The research explored the applicability of estimating gaze using capsule networks, based entirely on ocular images. The research also explored practical aspects of the proposed gaze estimation methodology, such as personalization and transfer learning.
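To give a flavor of the approach, here is a minimal sketch of a capsule-style gaze regressor in PyTorch. This is not the code from the paper: the eye-patch size, capsule counts, and layer widths are illustrative assumptions, and only the overall pattern (a convolutional front end, squashed capsule vectors, and a regression head producing pitch and yaw) reflects the idea.

```python
import torch
import torch.nn as nn

def squash(s, dim=-1, eps=1e-8):
    # Capsule squashing non-linearity: keeps the vector's orientation
    # while mapping its length into [0, 1).
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

class GazeCapsNet(nn.Module):
    def __init__(self, n_caps=16, caps_dim=8):
        super().__init__()
        # Conventional convolutional front end over a 36x60 grayscale eye patch.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 64, 5, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, 5, stride=2), nn.ReLU(),
        )
        self.n_caps, self.caps_dim = n_caps, caps_dim
        # "Primary capsules": conv features regrouped into small vectors.
        self.primary = nn.Conv2d(128, n_caps * caps_dim, 3, stride=2)
        # Regression head maps pooled capsule vectors to (pitch, yaw).
        self.head = nn.Linear(n_caps * caps_dim, 2)

    def forward(self, x):
        h = self.conv(x)
        u = self.primary(h)                       # (B, n_caps*caps_dim, H, W)
        u = u.view(x.size(0), self.n_caps, self.caps_dim, -1)
        u = squash(u.mean(-1))                    # pool spatially, then squash
        return self.head(u.flatten(1))            # (B, 2) gaze angles

model = GazeCapsNet()
eyes = torch.randn(4, 1, 36, 60)      # a batch of 4 ocular images
print(model(eyes).shape)              # torch.Size([4, 2])
```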

Another interesting presentation during the session was “Tracing Shapes with Eyes”, presented by Mohammad Rakib Hasan from the University of Saskatchewan. The research concentrated on the applicability of gaze tracking for drawing continuous lines. The results indicated that users were able to trace the given shapes reasonably well using gaze as input.


The second session focused on research on biomechanics, motor control, and evaluation. The first presentation of the session was by Dr. Toshiyuki Hagiya from Toyota Motor Corporation on “Acceptability Evaluation of Inter-driver Interaction System via a Driving Agent Using Vehicle-to-vehicle Communication”. Here, an agent tries to understand the driver’s verbal expressions and sends messages to other nearby drivers using vehicular networks. The research aims to reduce accidents by eliminating misunderstandings that could arise between drivers.


Another interesting presentation during the session was a poster presentation by Dr. Jean-Marc Seigneur from the University of Geneva on "Body Chain: Using Blockchain to Reach Augmented Body Health State Consensus". The study concentrates on overcoming possible compromises of body state due to attacks on human implants distributed across different locations of the body, and proposes how to achieve consensus on the health state by utilizing distributed ledger technologies.
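The intuition, as I understood it, is that no single implant should be able to dictate the body's health state. A toy quorum vote (my own illustration, not the protocol from the paper) captures this:

```python
from collections import Counter

def health_consensus(readings, quorum=0.66):
    # readings: {implant_id: reported state}; returns the state that a
    # quorum fraction of implants agree on, or None if there is no consensus.
    if not readings:
        return None
    state, votes = Counter(readings.values()).most_common(1)[0]
    return state if votes / len(readings) >= quorum else None

# A single tampered implant ('pump') cannot override the honest majority.
print(health_consensus({'pacemaker': 'OK', 'glucose': 'OK', 'pump': 'ALERT'}))  # OK
```

A real distributed ledger adds signed, append-only records on top of this, but the quorum requirement is what blunts a single compromised implant.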


Day 2 (May 28)

Day 2 of the conference commenced with the keynote on “Seamless User Experience for IoT” by Dr. Wei Li, Director of the Human-Machine Interaction Lab of Huawei Canada. During the presentation, he highlighted some of the key challenges IoT poses for user experience, such as identifying the different types of devices present in the user's surroundings. He proposed how these devices can be classified depending on their proximity to the user, and how efficient multi-modal sensing can be implemented to improve the user experience.


A key highlight of the talk was how Huawei has utilized some of these technologies for social good. They have used eye-tracking for detecting visual impairments, which is quite similar to the experiments we perform at ODU on eye-tracking for ADHD and PTSD.
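The proximity-based classification from the keynote could look something like the sketch below. This is my reading of the idea rather than Huawei's implementation; the zone boundaries and device list are made up for illustration.

```python
def proximity_zone(distance_m):
    # Bucket a device into an interaction zone by its estimated distance
    # (e.g. derived from BLE RSSI); each zone gets different sensing modalities.
    if distance_m < 0.5:
        return "personal"   # wearables: rich multi-modal sensing
    if distance_m < 3.0:
        return "near"       # phone, smart speaker: voice and gesture
    return "ambient"        # TV, lights: coarse presence sensing

devices = {"watch": 0.1, "speaker": 2.0, "tv": 4.5}
print({name: proximity_zone(d) for name, d in devices.items()})
# {'watch': 'personal', 'speaker': 'near', 'tv': 'ambient'}
```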



The first session was on technology to support healthcare and well-being. Among the interesting presentations during the session was “xClothes” by Dr. Haoran Xie from the Japan Advanced Institute of Science and Technology. The research concentrates on how retractable structures can be utilized to improve the wearer's comfort by either opening or closing depending on the humidity level. Through their experiments, the authors verified the capability of the proposed system to improve the comfort of the wearer.
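The open/close behavior amounts to a simple threshold controller. Here is a toy version of that logic (my own sketch, not the authors' design); the hysteresis band is an assumed detail to keep the structure from oscillating around a single threshold:

```python
def update_state(humidity_pct, is_open, open_at=65.0, close_at=55.0):
    # Open above one threshold, close below a lower one; in between,
    # keep the current state (hysteresis).
    if not is_open and humidity_pct >= open_at:
        return True     # too humid: open the structure to cool the wearer
    if is_open and humidity_pct <= close_at:
        return False    # dry again: close to retain warmth
    return is_open

state = False
for h in (50, 60, 70, 60, 50):
    state = update_state(h, state)
    print(h, "open" if state else "closed")
```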


The final session of the conference was on smartphones and applications. The following were some of the highlights from the session.

Caring4Dementia: a mobile application to train people caring for patients with dementia – A mobile application for training people to interact with patients with dementia, presented by Anna Polyvyana, University of Manitoba. Project: https://tactilerobotics.ca/caring4dementia/

User Gesture Elicitation of Common Smartphone Tasks for Hand Proximate User Interfaces – Explores the concepts of hand proximate user interfaces. Presented by Ahmed Shariff, University of Manitoba. Project: http://hci.cs.umanitoba.ca/publications/details/user-gesture-elicitation-of-common-smartphone-tasks-for-hand-proximate-user

The conference concluded with the award ceremony.

Best Paper: Shell Shaped Smart Clothes for Non-verbal Communication, Masato Sekine, Naoya Watabe, Miki Yamamura, and Hiroko Uchiyama from Joshibi University of Art and Design, Tokyo, Japan.

Other Resources

Our Paper
Gaze-Net: appearance-based gaze estimation using capsule networks, Bhanuka Mahanama, Yasith Jayawardana, Sampath Jayarathna

TL;DR: Gaze estimation from ocular images using a capsule network with different regularization schemes. We personalize the model using transfer learning on a smaller dataset and compare performance with and without retraining, and discuss how the findings can be applied in a practical scenario to exploit the advantages of both data-driven and event-driven approaches to training the model.
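The personalization step amounts to standard fine-tuning: start from a model pre-trained on many users, freeze the generic feature extractor, and retrain only the regression head on a small per-user calibration set. A sketch of that workflow (the layer sizes and data are stand-ins, not the paper's exact recipe):

```python
import torch
import torch.nn as nn

base = nn.Sequential(                  # stand-in for the pre-trained Gaze-Net body
    nn.Flatten(), nn.Linear(36 * 60, 128), nn.ReLU(),
)
head = nn.Linear(128, 2)               # per-user (pitch, yaw) regressor

for p in base.parameters():            # freeze generic features
    p.requires_grad = False

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# A handful of calibration samples from the new user (random stand-ins here).
x, y = torch.randn(32, 1, 36, 60), torch.randn(32, 2)
for _ in range(100):                   # short per-user fine-tuning loop
    opt.zero_grad()
    loss = loss_fn(head(base(x)), y)
    loss.backward()
    opt.step()
print(f"calibration loss: {loss.item():.4f}")
```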

Slides:




Live demo: https://mgaze.nirds.cs.odu.edu/gazenet-browser 

Project page: https://mgaze.nirds.cs.odu.edu/

Twitter Thread for the conference: [Thanks Yasith Jayawardana (@yasithmilinda), Gavindya Jayawardena (@Gavindya2)]


The 12th Augmented Human International Conference will be held in Geneva, Switzerland. Conference Website: https://www.augmented-human.com/

Photos: [Thanks Yasith Jayawardana (@yasithmilinda), Gavindya Jayawardena (@Gavindya2), Bathsheba Farrow (@sheissheba)] https://photos.app.goo.gl/qg4tajCvxxE8LLY78


-- Bhanuka Mahanama (@mahanama94)
