Sunday, February 3, 2019

2019-02-02: Two Days in Hawaii - the 33rd AAAI Conference on Artificial Intelligence (AAAI-19)

The 33rd AAAI Conference on Artificial Intelligence, the 31st Innovative Applications of Artificial Intelligence Conference (IAAI), and the 9th Symposium on Educational Advances in Artificial Intelligence were held at the Hilton Hawaiian Village, Honolulu, Hawaii. I had one paper accepted at IAAI 2019, Cleaning Noisy and Heterogeneous Metadata for Record Linking across Scholarly Big Datasets, coauthored with Athar Sefid (my student at PSU), Jing Zhao (my mentee at PSU), Lu Liu (a graduate student who published a Nature Letter), Cornelia Caragea (my collaborator at UIC), Prasenjit Mitra, and C. Lee Giles.
This year, AAAI received its greatest number of submissions ever -- 7,095, nearly double the number in 2018. A total of 18,191 reviews were collected, and over 95% of papers received 3 reviews. 1,147 papers were accepted, an acceptance rate of 16.2% -- the lowest in AAAI history. There were in total 122 technical sessions, 460 oral presentations (15-minute talks), and 687 posters (2-minute flash talks). Authors from China submitted the largest number of papers and had the largest number accepted (382, about a 16% acceptance rate); authors from the US had 264 papers accepted (21%). Israel had the highest acceptance rate (24.4%). The topics of machine learning, NLP, and computer vision together took over 61% of all submissions and 59% of all accepted papers. The top 3 topics by submission increase were reasoning under uncertainty, applications, and humans and AI. The top 3 by submission decrease were cognitive systems, computational sustainability, and human computation and crowdsourcing. Papers with supplementary material had a much higher acceptance rate (27%) than papers without (12%).
IAAI 2019 was less competitive, with 118 submissions and an acceptance rate of 35%. There were 36 emerging application papers (including ours) and 5 deployed application papers. The Deployed Application Awards were conferred on 5 papers:
  • Grading uncompilable programs by Rohit Takhar & Varun Aggarwal (Machine Learning India)
  • Automated Dispatch of Helpdesk Email Tickets by Atri Mandal et al. (IBM Research India)
  • Transforming Underwriting in the Life Insurance Industry by Marc Maier et al. (Massachusetts Mutual Life Insurance Company)
  • Large Scale Personalized Categorization of Financial Transactions by Christopher Lesner et al. (Intuit Inc.)
  • A Genetic Algorithm for Finding a Small and Diverse Set of Recent News Stories on a Given Subject: How We Generate AAAI’s AI-Alert by Joshua Eckroth and Eric Schoen (i2kconnect)
The Robert S. Engelmore Memorial Award was conferred on Milind Tambe (USC) for outstanding research in the area of multi-agent systems and their application to problems of societal significance. I know Tambe's work partially through his student Amulya Yadav, whom I interviewed for an assistant professor position at Penn State IST. Tambe is well known for his work on connecting AI with social good.
The classic paper award was conferred on Prem Melville et al. for their 2002 AAAI paper "Content-Boosted Collaborative Filtering for Improved Recommendations" (cited 1,621 times on Google Scholar). This work combined content-based prediction with collaborative filtering for recommendation, and the hybrid approach has since become a classic textbook technique for recommender systems.
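The hybrid idea behind that paper can be sketched roughly as follows. This is a minimal illustration of the general content-boosted recipe (fill the sparse rating matrix with content-based predictions, then run standard user-based collaborative filtering on the densified matrix), not the paper's exact algorithm -- Melville et al. used a naive Bayes content predictor and a weighted Pearson scheme. All function and variable names here are mine.

```python
import numpy as np

def content_boosted_cf(ratings, content_pred, target_user, target_item, k=2):
    """Sketch of content-boosted collaborative filtering.

    ratings:      (n_users, n_items) matrix with np.nan for unrated items.
    content_pred: same shape, dense predictions from some content-based
                  model (assumed precomputed here).
    """
    # 1. Pseudo ratings matrix: keep actual ratings, fill the gaps
    #    with the content-based predictions.
    pseudo = np.where(np.isnan(ratings), content_pred, ratings)

    # 2. Standard user-based CF on the dense pseudo matrix: Pearson
    #    correlation between the target user and every other user.
    target = pseudo[target_user]
    sims = np.array([np.corrcoef(target, row)[0, 1] for row in pseudo])
    sims[target_user] = -np.inf  # exclude the user themself

    # 3. Predict from the k most similar users (mean-offset form).
    neighbors = np.argsort(sims)[-k:]
    offsets = pseudo[neighbors, target_item] - pseudo[neighbors].mean(axis=1)
    weights = sims[neighbors]
    return target.mean() + np.dot(weights, offsets) / np.abs(weights).sum()
```

The key point of the hybrid is step 1: because the pseudo matrix is dense, the correlation in step 2 is computed over all items, which mitigates the sparsity problem that pure collaborative filtering suffers from.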
Due to the limited time I spent in Hawaii, I went to only 3 invited talks. 
The first was given by Cynthia Breazeal, director of the Personal Robots Group at MIT. Her presentation was on a social robot called Jibo. Unlike Google Home and Amazon Echo, this robot focuses more on social communication with people, instead of selling products and controlling devices. It is based on the Bayesian Theory of Mind communication framework and Bloom’s learning theory. Jibo has been tested in early childhood education and in fostering community connection among older adults. The goal is to promote early learning with peers and to alleviate loneliness, helplessness, and boredom. It can talk like a human and perform simple motions, such as dancing. My personal opinion is that we should be careful when using these robots. They may be useful for medical treatment, but people should always be encouraged to reach out to other people, not robots. 
The second was given by Milind Tambe on "AI and Multiagent Systems Research for Social Good". He divided this broad topic into 3 aspects: public safety and security, conservation, and public health. He views social problems as multiagent systems and pointed out that the key research challenge is how to optimize limited intervention resources when interacting with other agents. Examples include conservation/wildlife protection, in which they used game theory to successfully predict poaching in national parks in Uganda; homeless youth shelters in Los Angeles (this is Amulya's work); and patrol scheduling using game theory. 
The last one was given by the world-famous deep learning expert Ian Goodfellow, Senior Staff Research Scientist at Google AI and an author of the widely used Deep Learning book. His talk was on "Adversarial Machine Learning" -- of course, he invented the Generative Adversarial Network (GAN). He described the prosperity of machine learning as a Cambrian Explosion, and gave applications of GANs in security, model-based optimization, reinforcement learning, extreme reliability, label efficiency, domain adaptation, fairness, accountability, transparency, and finally neuroscience. His current research focuses on designing extremely reliable systems for autonomous vehicles, air traffic control, surgical robots, medical diagnosis, etc. Much of his data consists of images. 
There were too many sessions and I was interested in many of them, but I finally chose to focus on the NLP sessions. The paper titles can be found on the conference website. Most NLP papers use AI techniques to deal with fundamental NLP problems such as representation learning, sentence-level embedding, and entity and relation extraction. I summarize what I learned below:
(1) GANs, attentive models, and Reinforcement Learning (RL) are gaining more attention, especially the latter. For example, RL is used to learn sentence embeddings using attentive recursive trees (Jiaxin Shi et al., Tsinghua University). RL is used to build a hierarchical framework for relation extraction (Takanobu et al., Tsinghua University). An attentive GAN was used to generate chatbot responses (Yu Wu et al., Beihang University). RL is used to generate topically coherent visual stories (Qiuyuan Huang et al., MSR). Plain deep neural networks are still popular, but less dominant in NLP tasks. 
(2) Zero-shot learning has become a popular topic. Zero-shot learning means learning without any labeled instances of the target task. For example, Lee and Jha (MSR) presented Zero-Shot Adaptive Transfer for Conversational Language Understanding, and Shruti Rijhwani (CMU) presented Zero-Shot Neural Transfer for Cross-Lingual Entity Linking.
(3) Entity and relation extraction, one of the fundamental tasks in NLP, is still not well solved. People are approaching this problem in different ways, but it seems that jointly extracting entities and relations works better than dealing with them separately. The CoType model proposed by Xiang Ren et al. has become a baseline. New models beat the baselines, though the boost is marginal. For example, Rijhwani et al. (CMU) proposed Zero-Shot Neural Transfer for Cross-Lingual Entity Linking; Changzhi Sun & Yuanbin Wu (East China Normal University) proposed Distantly Supervised Entity Relation Extraction with Adapted Manual Annotations; and Gupta et al. (Siemens) proposed Neural Relation Extraction Within and Across Sentence Boundaries.
(4) Most advances in QA systems are still limited to the answer selection task. Generating natural language is still a very difficult task, even with DNNs. There is an interesting work by Lili Yao (Peking University) in which they generate short stories from a given keyphrase, but the code is not ready to be released. 

(5) There is one paper on a framework for question generation from phrase extraction by Siyuan Wang et al. (Fudan University), which is related to my recent research in summarization. However, the input to their system is single sentences rather than paragraphs, not to mention full text, so it is not directly applicable to our work. Some session names look interesting in general, but the papers are not very interesting, as they usually focus on very narrow topics. 

The IAAI session I attended featured 5 presentations. 
·       Early-stopping of scattering pattern observation with Bayesian Modeling by Asahara et al. (Japan). This is a good example of applying AI to physics. They basically use unsupervised learning to predict neutron scattering patterns. The goal was to reduce the cost of the equipment needed to generate powerful neutron beams.
·       Novelty Detection for Multispectral Images with Application to Planetary Exploration by Hannah R. Kerner et al. (ASU). They are designing AI techniques to facilitate fast decision-making for Mars missions.
·       Expert Guided Rule Based Prioritization of Scientifically Relevant Images for Downlinking over Limited Bandwidth from Planetary Orbiters by Srija Chakraborty (ASU).
·       Ours on Cleaning Noisy and Heterogeneous Metadata for Record Linking across Scholarly Big Datasets. I received a comment and a question. The comment, from an audience member named Chris Lesner, referred me to MinHashing of shingles, which could be another potential solution to the scalability problem. The question came from a person trying to understand the entity matching problem. I also got the business card of Diane Oyen, a staff scientist in the Information Science group at Los Alamos National Lab. She has some interesting problems in plagiarism detection on which we could potentially collaborate. 
·       A fast machine learning workflow for rapid phenotype prediction from whole shotgun metagenomes by Anna Paola Carrieri et al. (IBM)
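Chris Lesner's MinHashing-of-shingles suggestion can be sketched as follows. This is a generic illustration of the technique over character shingles, under my own assumptions about normalization and hashing; it is not code from our system or any system presented at the conference.

```python
import hashlib

def shingles(text, k=3):
    """Character k-shingles of a lowercased, whitespace-normalized string."""
    text = " ".join(text.lower().split())
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def minhash_signature(shingle_set, num_hashes=64):
    """For each of num_hashes seeded hash functions, keep the minimum
    hash value over the shingle set. Two sets' signatures agree at any
    given position with probability equal to their Jaccard similarity."""
    return [
        min(int.from_bytes(hashlib.md5(f"{seed}:{s}".encode()).digest()[:8], "big")
            for s in shingle_set)
        for seed in range(num_hashes)
    ]

def estimated_jaccard(sig_a, sig_b):
    """Fraction of agreeing positions estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

The scalability benefit comes from replacing variable-size shingle sets with short fixed-length signatures, which can then be bucketed with locality-sensitive hashing so that only records landing in the same bucket are ever compared, avoiding an all-pairs comparison over millions of metadata records.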
One impression I had of the conference is that most presentations are terrible. Dr. Huan Sun, assistant professor at OSU, agreed. This is the disadvantage of sending students to the conference: it is much more beneficial to the students than to the audience. The slides are not readable, presenters speak too softly, and many do not spend enough time explaining key results, leaving essentially no room for high-quality questions: the audience simply does not understand what they are talking about! In particular, although Chinese scholars got many papers accepted and presented, most do not present well. Most audience members were swiping their smartphones rather than listening to the talks. 
Another impression is that the conference is too big! There is little chance to cover enough sessions and meet with speakers. I was lucky to meet my old colleagues from Penn State: Madian Khabsa (now at Apple), Shuting Wang (now at Facebook), and Alex Ororbia II (now at RIT). I also met Prof. Huan Liu of ASU and had lunch with a few new friends at Apple. 
Overall, the conference was well organized, although the program arrived very late, which delayed my trip planning. The location was very scenic, but too expensive: registration was almost $1k and lunch was not covered! Hawaii is very beautiful. I enjoyed Waikiki Beach and came across a rainbow. 

Jian Wu
