Saturday, December 12 • 08:30 - 18:30
Reasoning, Attention, Memory (RAM) Workshop


Motivation and Objective of the Workshop

A key component of solving AI is the use of long-term dependencies as well as short-term context during inference, i.e., the interplay of reasoning, attention and memory. The machine learning community has had great success in the last decades at solving basic prediction tasks such as text classification, image annotation and speech recognition. However, solutions to deeper reasoning tasks have remained elusive. Until recently, most existing machine learning models have lacked an easy way to read from and write to part of a (potentially very large) long-term memory component, and to combine this seamlessly with inference. To combine memory with reasoning, a model must learn how to access it, i.e. to perform *attention* over its memory. Within the last year or so, in part inspired by some earlier works [8, 9, 14, 15, 16, 18, 19], there has been notable progress in these areas, which this workshop addresses. Models developing notions of attention [12, 5, 6, 7, 20, 21] have shown positive results on a number of real-world tasks such as machine translation and image captioning. There has also been a surge in building models of computation which explore differing forms of explicit storage [1, 10, 11, 13, 17]. For example, it was recently shown how to learn a model to sort a small set of numbers [1], as well as a host of other symbolic manipulation tasks. Another promising direction is work employing a large long-term memory for reading comprehension; the capability of somewhat deeper reasoning has been shown on synthetic data [2], and promising results are starting to appear on real data [3, 4].
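
As a concrete illustration of the "attention over memory" idea above, the following is a minimal sketch (in NumPy, not taken from any of the cited papers) of a soft-attention read: each memory slot is weighted by its dot-product similarity to a query vector, and the read result is the weighted sum of slot contents. The function name, dimensions, and random contents are purely illustrative.

```python
import numpy as np

def attention_read(memory, query):
    """Soft attention read over a memory matrix.

    memory: (num_slots, dim) array of stored vectors
    query:  (dim,) array representing the current reasoning state
    Returns a (dim,) vector blending the slots most similar to the query.
    """
    scores = memory @ query                  # dot-product similarity per slot
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ memory                  # weighted sum of slot contents

# Toy usage: a 4-slot memory of 3-dimensional vectors.
memory = np.random.randn(4, 3)
query = np.random.randn(3)
read_vector = attention_read(memory, query)
print(read_vector.shape)  # (3,)
```

Because the weights are a differentiable function of the query and the memory, such a read can be trained end-to-end by backpropagation, which is what makes learning where to attend feasible in the models discussed above.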

In spite of this resurgence, research into learning algorithms that combine these components, and the analysis of such algorithms, is still in its infancy. The purpose of this workshop is to bring together researchers from diverse backgrounds to exchange ideas that could address the various drawbacks of such models and lead to more interesting ones in the quest for true AI. We thus plan to focus on the following issues:

* How to decide what to write and what not to write in the memory.
* How to represent knowledge to be stored in memories.
* Types of memory (arrays, stacks, or storage within the weights of the model), when they should be used, and how they can be learnt.
* How to do fast retrieval of relevant knowledge from memories when the scale is huge.
* How to build hierarchical memories, e.g. employing multiscale notions of attention.
* How to build hierarchical reasoning, e.g. via composition of functions.
* How to incorporate forgetting/compression of information which is not important.
* How to properly evaluate reasoning models. Which tasks provide proper coverage and also allow for unambiguous interpretation of systems' capabilities? Are artificial tasks a convenient way to do this?
* Can we draw inspiration from how animal or human memories are stored and used?

The workshop will devote most of its time to invited talks, contributed talks and a panel discussion. To move away from a mini-conference effect we will not have any posters. To encourage interaction, a webpage will be employed for real-time updates, also allowing people to post questions before or during the workshop; these will be asked at the end of talks or during the panel, or answered online.

Please see our external page for more information: http://www.jaseweston.com/ram



8:20 - 8:30 Introduction

8:30 - 10:00 Session 1

8:30 - 9:05 Invited talk (35min): “How to learn an algorithm” Juergen Schmidhuber, IDSIA.

9:05 - 9:40 Invited talk (35min): "From Attention to Memory and towards Longer-Term Dependencies" Yoshua Bengio, University of Montreal.

9:40 - 10:00 Contributed talk (20min): “Generating Images from Captions with Attention” Elman Mansimov, Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov (University of Toronto).

10:00 - 10:30 Coffee break

10:30 - 12:30 Session 2

10:30 - 11:05 Invited talk (35min): Alex Graves, Google DeepMind.

11:05 - 11:40 Invited talk (35min): “Exploiting cognitive constraints to improve machine-learning memory models” Mike Mozer, University of Colorado.

11:40 - 12:00 Contributed talk (20min): “Structured Memory for Neural Turing Machines” Wei Zhang, Yang Yu, Bowen Zhou (IBM Watson).

12:00 - 12:20 Contributed talk (20min): “Towards Neural Network-based Reasoning” Baolin Peng, The Chinese University of Hong Kong; Zhengdong Lu, Noah's Ark Lab, Huawei Technologies; Hang Li, Noah's Ark Lab, Huawei Technologies; Kam-Fai Wong, The Chinese University of Hong Kong.

12:20 - 12:25 Lightning talk (5min): “Learning to learn neural networks” Tom Bosc, Inria.

12:25 - 12:30 Lightning talk (5min): “Evolving Neural Turing Machines” Rasmus Boll Greve, Emil Juul Jacobsen, Sebastian Risi (IT University of Copenhagen).

12:30 - 2:30 Lunch break

2:30 - 4:30 Session 3

2:30 - 3:05 Invited talk (35min): "Neural Machine Translation: Progress Report and Beyond" Kyunghyun Cho, New York University.

3:05 - 3:40 Invited talk (35min): “Sleep, learning and memory: optimal inference in the prefrontal cortex” Adrien Peyrache, New York University.

3:40 - 4:00 Contributed talk (20min): “Dynamic Memory Networks for Natural Language Processing” Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Richard Socher (MetaMind).

4:00 - 4:20 Contributed talk (20min): “Neural Models for Simple Algorithmic Games” Sainbayar Sukhbaatar, Arthur Szlam, Rob Fergus (Facebook AI Research).

4:20 - 4:25 Lightning talk (5min): “Chess Q&A : Question Answering on Chess Games” Volkan Cirik, Louis-Philippe Morency, Eduard Hovy (CMU).

4:25 - 4:30 Lightning talk (5min): “Considerations for Evaluating Models of Language Understanding and Reasoning” Gabriel Recchia, University of Cambridge.

4:30 - 5:00 Coffee break

5:00 - 6:30 Session 4

5:00 - 5:35 Invited talk (35min): “A Roadmap towards Machine Intelligence” Tomas Mikolov, Facebook AI Research.

5:35 - 5:55 Contributed talk (20min): “Learning Deep Neural Network Policies with Continuous Memory States” Marvin Zhang, Zoe McCarthy, Chelsea Finn, Sergey Levine, Pieter Abbeel (UC Berkeley).

5:55 - 6:30 Invited talk (35min): “The Neural GPU and the Neural RAM machine” Ilya Sutskever, Google.



Saturday December 12, 2015 08:30 - 18:30
Room 510 AC
