News

Four new papers accepted at AAAI 2018 on topics ranging from recurrent models to language grounding and multimodal fusion

December 3, 2017

Members of the MultiComp Lab had four papers accepted at the AAAI Conference on Artificial Intelligence (AAAI 2018, https://aaai.org/Conferences/AAAI-18/) in New Orleans, Louisiana, USA.

“Memory Fusion Network for Multi-view Sequential Learning” by Amir Zadeh, Paul Pu Liang, Navonil Mazumder, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency studies the synchronization of multi-view sequences using a multi-view gated memory. The model achieves state-of-the-art results on seven publicly available datasets spanning multimodal sentiment analysis, emotion recognition, and personality trait recognition.
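
As a rough illustration of the gated-memory idea, the sketch below encodes each view with its own LSTM and updates a shared memory at every step through learned retain and write gates. This is a minimal sketch, not the authors’ implementation: the class name, gate layout, and dimensions are illustrative assumptions, and the views are assumed to be aligned to a common sequence length.

import torch
import torch.nn as nn

class GatedMemoryFusion(nn.Module):
    def __init__(self, view_dims, hidden=32, mem=64):
        super().__init__()
        # one recurrent encoder per view (e.g., language / vision / acoustic)
        self.encoders = nn.ModuleList(
            nn.LSTM(d, hidden, batch_first=True) for d in view_dims
        )
        fused = hidden * len(view_dims)
        self.gamma1 = nn.Linear(fused + mem, mem)  # retain gate
        self.gamma2 = nn.Linear(fused + mem, mem)  # write gate
        self.proposal = nn.Linear(fused, mem)      # candidate memory content

    def forward(self, views):
        # views: list of tensors, each (batch, time, view_dim), same length
        hs = [enc(v)[0] for enc, v in zip(self.encoders, views)]
        h = torch.cat(hs, dim=-1)                  # (batch, time, fused)
        mem_state = h.new_zeros(h.size(0), self.proposal.out_features)
        for t in range(h.size(1)):
            x = torch.cat([h[:, t], mem_state], dim=-1)
            g1 = torch.sigmoid(self.gamma1(x))     # how much memory to keep
            g2 = torch.sigmoid(self.gamma2(x))     # how much to write
            mem_state = g1 * mem_state + g2 * torch.tanh(self.proposal(h[:, t]))
        return mem_state                           # fused multi-view summary

For instance, GatedMemoryFusion([300, 35, 74]) would fuse three hypothetical views with those feature dimensions; the key design point is that the memory is read and rewritten jointly across views at every time step rather than after each view is summarized separately.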

“Multi-attention Recurrent Network for Human Communication Comprehension” by Amir Zadeh, Paul Pu Liang, Soujanya Poria, Erik Cambria, Prateek Vij and Louis-Philippe Morency introduces the Multi-attention Recurrent Network, a neural framework that models both view-specific and cross-view dynamics in multimodal human communication.

“Lattice Recurrent Unit: Improving Convergence and Statistical Efficiency for Sequence Modeling” by Chaitanya Ahuja and Louis-Philippe Morency introduces a new recurrent unit with two distinct flows of information, one along time and one along depth. This design reduces the effect of vanishing gradients along depth while improving computational and statistical efficiency.
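
The sketch below illustrates the two-flow idea under stated assumptions: each cell in a grid of recurrent units receives one state flowing along time and one flowing along depth, so gradients have short paths in both directions. The GRUCell stand-in, class names, and wiring are assumptions for illustration, not the paper’s exact unit.

import torch
import torch.nn as nn

class LatticeCell(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.cell = nn.GRUCell(dim, dim)  # stand-in for the paper's unit

    def forward(self, h_time, h_depth):
        # treat the depth-flow state as input, the time-flow state as state
        return self.cell(h_depth, h_time)

class LatticeRNN(nn.Module):
    def __init__(self, dim, layers=3):
        super().__init__()
        self.cells = nn.ModuleList(LatticeCell(dim) for _ in range(layers))

    def forward(self, x):
        # x: (batch, time, dim); keep one time-flow state per layer
        states = [x.new_zeros(x.size(0), x.size(2)) for _ in self.cells]
        outputs = []
        for t in range(x.size(1)):
            h_depth = x[:, t]                # depth flow starts at the input
            for l, cell in enumerate(self.cells):
                states[l] = cell(states[l], h_depth)  # advance the time flow
                h_depth = states[l]          # pass the result upward in depth
            outputs.append(h_depth)
        return torch.stack(outputs, dim=1)   # (batch, time, dim)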

“Using Syntax to Ground Referring Expressions in Natural Images” by Volkan Cirik, Taylor Berg-Kirkpatrick, and Louis-Philippe Morency explores the use of syntax for referring expression recognition, the task of identifying the target object in an image referred to by a natural language expression. The proposed model, GroundNet, integrates syntax to accurately identify both the target object and the supporting objects mentioned in the expression.