2020-06-05: Augmented Human Online Trip Report
The 11th Augmented Human International Conference was held online on May 27th and 28th. The Augmented Human conference series focuses on scientific contributions toward technology for well-being and experience through augmenting human capabilities, and has served as a forum to present and exchange such ideas for 10 years. The conference included keynote speeches, demo presentations, poster presentations, and research presentations, organized under four main research tracks: neurosciences; biomechanics; technology for healthcare; and smartphones and applications.
Happy to announce that @Huawei is the main sponsor of the 11th Augmented Human International Conference, which has just started online with Prof. Amine Choukou from the University of Manitoba. Joining via https://t.co/LNCEzs65MO @umanitoba #augmented #human #augmentation pic.twitter.com/LfNpilasrC— Augmented Human (@augmented_human) May 27, 2020
Day 1 (May 27)
Day 1 of the conference started with a keynote by Professor Rory Cooper from the Human Engineering Research Laboratories, University of Pittsburgh. The topic of the keynote was “Advancing technologies for people with disabilities”, where he presented a smart wheelchair design that integrates different requirements of the user. Aspects covered by the design included mobility, intelligent bed technology for correct seating and weight distribution, and arm control that allows the user to operate a robotic arm integrated into the wheelchair.
Today's session started with keynote from Dr. Rory Cooper from @PittTweet @herlpitt on Advancing technologies for people with disabilities - Smart wheelchair design #AH2020 #augmented #human #augmentation pic.twitter.com/USeRm51RMv— Bhanuka Mahanama (@mahanama94) May 27, 2020
The first session of the author series was on neurosciences, where research presentations mainly centered on the neurophysiological behavior of humans: either extracting neurophysiological signals through the sensory organs or providing input to the nervous system through external mechanisms. Bradley Rey from the HCI Lab of the University of Manitoba presented “Eye-Free Graph Legibility”, which focused on providing tactile visualizations of graphs on the arm. The research used a skin-dragging technique, which can deliver the longer tactile perceptions required for visualization.
Bradley Rey from HCI Lab of @umanitoba now presenting "Eye-Free Graph Legibility: Using Skin-Dragging to Provide a Tactile Graph Visualization on the Arm" #AH2020 pic.twitter.com/70Ybic1Oxx— Gavindya (@Gavindya2) May 27, 2020
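As a thought exercise, here is a minimal sketch of how a data series might be turned into a sequence of skin-drag gestures along the forearm. The gesture representation, the 10 cm workspace, and the timing are my own assumptions for illustration, not details from the paper.

```python
# Hypothetical sketch: mapping a data series to skin-drag gestures on the forearm.
# The gesture format, 10 cm workspace, and timing are assumptions, not the paper's design.
from dataclasses import dataclass

@dataclass
class DragGesture:
    start_mm: float    # start position along the forearm, in millimetres
    end_mm: float      # end position along the forearm, in millimetres
    duration_s: float  # how long the drag lasts

def series_to_drags(values, span_mm=100.0, seconds_per_point=0.8):
    """Normalize a data series to [0, 1] and map each consecutive pair of
    points to one drag gesture whose endpoints encode the two values."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) or 1.0
    positions = [(v - lo) / scale * span_mm for v in values]
    return [
        DragGesture(start_mm=a, end_mm=b, duration_s=seconds_per_point)
        for a, b in zip(positions, positions[1:])
    ]

if __name__ == "__main__":
    monthly_rainfall = [30, 45, 80, 60, 20]   # toy data series
    for gesture in series_to_drags(monthly_rainfall):
        print(gesture)
```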
During the first session, we presented our paper “Gaze-Net: Appearance Based Gaze Estimation using Capsule Networks”. The research explored the applicability of estimating gaze using capsule networks entirely from ocular images. The research also explored practical aspects of the proposed gaze estimation methodology, such as personalization and transfer learning.
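For readers curious about the general shape of such a model, below is a simplified sketch of appearance-based gaze regression from ocular images with a capsule-style layer. The layer sizes, the squash-based primary capsules, and the 36x60 grayscale input are illustrative assumptions; this is not the exact Gaze-Net architecture from the paper.

```python
# Simplified sketch of appearance-based gaze regression with a capsule-style layer.
# Architecture details here are assumptions, not the paper's exact model.
import torch
import torch.nn as nn

def squash(v, dim=-1, eps=1e-8):
    """Capsule squashing: keeps vector orientation, maps length into [0, 1)."""
    norm_sq = (v ** 2).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * v / torch.sqrt(norm_sq + eps)

class GazeCapsNetSketch(nn.Module):
    def __init__(self, capsule_dim=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=5, stride=2), nn.ReLU(),
        )
        self.capsule_dim = capsule_dim
        self.regressor = nn.Sequential(
            nn.LazyLinear(128), nn.ReLU(),
            nn.Linear(128, 2),            # (pitch, yaw) in radians
        )

    def forward(self, x):
        h = self.features(x)                        # (B, C, H, W)
        caps = h.view(h.size(0), -1, self.capsule_dim)  # primary capsules
        caps = squash(caps)
        return self.regressor(caps.flatten(1))

if __name__ == "__main__":
    eye_patches = torch.randn(4, 1, 36, 60)         # batch of grayscale eye crops
    print(GazeCapsNetSketch()(eye_patches).shape)   # torch.Size([4, 2])
```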
Another interesting presentation during the session was “Tracing Shapes with Eyes”, presented by Mohammad Rakib Hasan from the University of Saskatchewan. The research concentrated on the applicability of gaze tracking for drawing continuous lines. The results indicated that users were able to trace the given shapes reasonably well using gaze as input.
Today I presented "Gaze-Net: Appearance-Based Gaze Estimation Using Capsule Networks", first presentation representing @oducs @WebSciDL @NirdsLab— Bhanuka Mahanama (@mahanama94) May 27, 2020
Live demo: https://t.co/3PY6DanM2z
Thanks to @OpenMaze @yasithmilinda @Gavindya2 https://t.co/it4Bs7GyiT
Mohammad Rakib Hasan from @usask presented Tracing Shapes with Eyes using Eye-tracking pic.twitter.com/U7jrszGjgy— Bhanuka Mahanama (@mahanama94) May 27, 2020
The second session was focused on research on biomechanics, motor control, and evaluation. The first presentation of the session was by Dr. Toshiyuki Hagiya from Toyota Motor Corporation on “Acceptability Evaluation of Inter-driver Interaction System via a Driving Agent Using Vehicle-to-vehicle Communication”. Here, an agent tries to understand the driver’s verbal expressions and sends messages to other nearby drivers using vehicular networks. The research aims to reduce accidents by eliminating misunderstandings that could arise between drivers.
2nd Author session just started with Toshiyuki Hagiya from @ToyotaMotorCorp presenting "Acceptability Evaluation of Inter-driver Interaction System" through vehicular agents. Methodology could improve Driver-driver communication via v2v networks. pic.twitter.com/TE3ZLDEILA— Bhanuka Mahanama (@mahanama94) May 27, 2020
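To make the idea concrete, here is a toy sketch of how an in-car agent might turn a driver's utterance into a structured message for nearby vehicles. The intent keywords, the message schema, and the broadcast() stub are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of the driving-agent idea: classify an utterance and hand a
# structured message to a V2V broadcast layer. All names and schemas are assumptions.
INTENTS = {
    "thanks": ["thank", "thanks", "appreciate"],
    "apology": ["sorry", "apologies", "my bad"],
    "warning": ["watch out", "careful", "hazard"],
}

def classify_utterance(text: str) -> str:
    lowered = text.lower()
    for intent, keywords in INTENTS.items():
        if any(k in lowered for k in keywords):
            return intent
    return "unknown"

def broadcast(message: dict) -> None:
    # Stand-in for the actual vehicle-to-vehicle network layer.
    print("V2V broadcast:", message)

if __name__ == "__main__":
    utterance = "Sorry, I cut in a bit close there"
    broadcast({"from": "vehicle-42", "intent": classify_utterance(utterance)})
```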
Another interesting presentation during the session was a poster presentation by Dr. Jean-Marc Seigneur from the University of Geneva on "Body Chain: Using Blockchain to Reach Augmented Body Health State Consensus". The study concentrates on overcoming possible compromise of the body state due to attacks on human implants distributed in different locations of the body, and proposes how to achieve health consensus by utilizing distributed ledger technologies.
Dr. Jean-Marc Seigneur from @unige_en now presenting "Body Chain" on maintaining consensus on state of the body using Blockchain. Security is critical for augmentation. #AH2020 #augmented #human pic.twitter.com/9KLqMgZn0K— Bhanuka Mahanama (@mahanama94) May 27, 2020
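A heavily simplified sketch of the consensus idea is below: implants append hash-chained state reports, and the body's overall state is taken as a majority vote over the latest report from each implant. The block format, the boolean health flag, and the majority rule are my own assumptions; the paper's actual protocol is not reproduced here.

```python
# Toy "health-state consensus" sketch over a hash-chained log of implant reports.
# Block structure and majority rule are assumptions, not the paper's protocol.
import hashlib
import json
from collections import Counter

def make_block(prev_hash: str, implant_id: str, healthy: bool) -> dict:
    payload = {"prev": prev_hash, "implant": implant_id, "healthy": healthy}
    payload["hash"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return payload

def consensus(chain: list) -> bool:
    """Majority vote over each implant's most recent report."""
    latest = {}
    for block in chain:                      # chain is ordered oldest -> newest
        latest[block["implant"]] = block["healthy"]
    votes = Counter(latest.values())
    return votes[True] >= votes[False]

if __name__ == "__main__":
    chain, prev = [], "genesis"
    for implant, ok in [("pacemaker", True), ("insulin-pump", True), ("neuro-stim", False)]:
        block = make_block(prev, implant, ok)
        chain.append(block)
        prev = block["hash"]
    print("body state healthy:", consensus(chain))
```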
Day 2 (May 28)
Day 2 of the conference commenced with the keynote on “Seamless User Experience for IoT” by Dr. Wei Li, Director of the Human-machine Interaction Lab of Huawei Canada. During the presentation, he highlighted some of the key challenges in IoT for user experience, such as identifying the different types of devices present in the user's surroundings. He proposed how these devices can be classified depending on their proximity to the user, and how efficient multi-modal sensing can be implemented to improve the user experience.
Second day of @augmented_human started with keynote from Dr. Wei Li from HMI Lab @huawei Canada, highlighting challenges in IoT, User experience, Multi-modal sensing, and Human Augmentation #AH2020 #augmented #human pic.twitter.com/CM7GMrBwtV— Bhanuka Mahanama (@mahanama94) May 28, 2020
A key highlight of the keynote was how Huawei has utilized some of these technologies for social good. For example, they have used eye-tracking for detecting visual impairments, which is quite similar to the experiments we perform at ODU on eye-tracking for ADHD and PTSD.
Some projects with eye-tracking, and gaze detection pic.twitter.com/mfVTovfAAW— Bhanuka Mahanama (@mahanama94) May 28, 2020
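Here is a small sketch of the proximity idea from the keynote: group nearby devices into zones by estimated distance so interaction can be prioritized. The zone thresholds and the RSSI-based distance estimate (a log-distance path-loss model) are my own assumptions, not details of Huawei's implementation.

```python
# Toy proximity classification: estimate distance from signal strength and bucket
# devices into zones. Thresholds and the path-loss model are assumptions.
def rssi_to_metres(rssi_dbm: float, tx_power_dbm: float = -59.0,
                   path_loss_exp: float = 2.0) -> float:
    """Rough distance estimate from received signal strength."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def proximity_zone(distance_m: float) -> str:
    if distance_m < 0.5:
        return "personal"   # wearables, phone in hand
    if distance_m < 3.0:
        return "near"       # desk or room devices
    return "ambient"        # everything else in the environment

if __name__ == "__main__":
    for name, rssi in [("watch", -45), ("tv", -70), ("hallway speaker", -85)]:
        d = rssi_to_metres(rssi)
        print(f"{name}: ~{d:.1f} m -> {proximity_zone(d)}")
```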
The first session was on technology to support healthcare and well-being. Among the interesting presentations during the session was “xClothes” by Dr. Haoran Xie from the Japan Advanced Institute of Science and Technology. The research concentrates on how retractable structures can be utilized to improve the wearer's comfort by opening or closing depending on the humidity level. Through their experiments, the authors verified the capability of the proposed system to improve the comfort of the wearer.
Dr. Haoran Xie from JAIST presented xClothes— Bhanuka Mahanama (@mahanama94) May 28, 2020
Project link: https://t.co/y1WBQOrckn pic.twitter.com/eu8A24h5yE
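The behavior described above boils down to a simple control loop: open the retractable structures when humidity rises and close them when it drops. The sketch below is a toy version with hypothetical thresholds and hysteresis, not the paper's actual hardware or firmware.

```python
# Toy humidity-driven vent control with hysteresis; thresholds are assumptions.
OPEN_THRESHOLD = 70.0    # % relative humidity at which the structures open
CLOSE_THRESHOLD = 55.0   # % relative humidity at which they close again

def control_step(humidity: float, vents_open: bool) -> bool:
    """Return the new vent state given the current humidity reading."""
    if not vents_open and humidity >= OPEN_THRESHOLD:
        return True
    if vents_open and humidity <= CLOSE_THRESHOLD:
        return False
    return vents_open

if __name__ == "__main__":
    vents_open = False
    for reading in [50, 62, 73, 68, 58, 52]:   # simulated sensor readings
        vents_open = control_step(reading, vents_open)
        print(f"humidity {reading}% -> vents {'open' if vents_open else 'closed'}")
```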
The final session of the conference was on smartphones and applications. The following were some of the highlights from the session.
Caring4Dementia: a mobile application to train people caring for patients with dementia – A mobile application for training caregivers to interact with patients with dementia, presented by Anna Polyvyana, University of Manitoba. Project: https://tactilerobotics.ca/caring4dementia/
User Gesture Elicitation of Common Smartphone Tasks for Hand Proximate User Interfaces – Explores the concept of hand proximate user interfaces. Presented by Ahmed Shariff, University of Manitoba. Project: http://hci.cs.umanitoba.ca/publications/details/user-gesture-elicitation-of-common-smartphone-tasks-for-hand-proximate-user
The conference concluded with the award ceremony.
Best Paper: Shell Shaped Smart Clothes for Non-verbal Communication, Masato Sekine, Naoya Watabe, Miki Yamamura, and Hiroko Uchiyama from Joshibi University of Art and Design, Tokyo, Japan.
Other Resources
Our Paper
Gaze-Net: appearance-based gaze estimation using capsule networks, Bhanuka Mahanama, Yasith Jayawardana, Sampath Jayarathna
TL;DR: Gaze estimation from ocular images using a capsule network with different regularization schemes. The model is personalized via transfer learning on a smaller dataset, and performance is compared with and without retraining. The paper also discusses how the findings can be applied in a practical scenario to exploit the advantages of both data-driven and event-driven approaches to training the model.
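As a rough illustration of the personalization step, the sketch below freezes a pretrained feature extractor and fine-tunes only the regression head on a handful of calibration samples from one user. It reuses the GazeCapsNetSketch stand-in from the earlier snippet; the sample count and hyperparameters are illustrative assumptions, not the settings used in the paper.

```python
# Toy personalization via transfer learning: freeze features, fine-tune the head.
# Hyperparameters and calibration-set size are assumptions.
import torch
import torch.nn as nn

def personalize(model, calib_images, calib_gaze, epochs=20, lr=1e-3):
    """Fine-tune only the regression head on one user's calibration data."""
    for p in model.features.parameters():    # keep the shared features fixed
        p.requires_grad = False
    model(calib_images[:1])                  # forward pass to materialize lazy layers
    opt = torch.optim.Adam(model.regressor.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(calib_images), calib_gaze)
        loss.backward()
        opt.step()
    return model

# Usage (with the GazeCapsNetSketch class from the earlier snippet):
# calib_images = torch.randn(20, 1, 36, 60)   # ~20 eye crops from one user
# calib_gaze   = torch.randn(20, 2) * 0.3     # corresponding (pitch, yaw)
# model = personalize(GazeCapsNetSketch(), calib_images, calib_gaze)
```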
Slides:
Live demo: https://mgaze.nirds.cs.odu.edu/gazenet-browser
Project page: https://mgaze.nirds.cs.odu.edu/
Twitter thread for the conference: [Thanks Yasith Jayawardana (@yasithmilinda), Gavindya Jayawardena (@Gavindya2)]
11th Augmented Human International Conference @augmented_human just started with Prof. @AmineChoukou from the @umanitoba. #AH2020
— Bhanuka Mahanama (@mahanama94) May 27, 2020
The 12th Augmented Human International Conference will be held in Geneva, Switzerland. Conference website: https://www.augmented-human.com/
Photos: [Thanks Yasith Jayawardana (@yasithmilinda), Gavindya Jayawardena (@Gavindya2), Bathsheba Farrow (@sheissheba)] https://photos.app.goo.gl/qg4tajCvxxE8LLY78
-- Bhanuka Mahanama (@mahanama94)