Summary

Many of the successes in deep learning build upon rich supervision. Reinforcement learning (RL) is no exception to this: algorithms for locomotion, manipulation, and game playing often rely on carefully crafted reward functions that guide the agent. But defining dense rewards becomes impractical for complex tasks. Moreover, attempts to do so frequently result in agents exploiting human error in the specification. To scale RL to the next level of difficulty, agents will have to learn autonomously in the absence of rewards.

We define task-agnostic reinforcement learning (TARL) as learning in an environment without rewards in order to later quickly solve downstream tasks. Active research questions in TARL include designing objectives for intrinsic motivation and exploration, learning unsupervised task or goal spaces, global exploration, learning world models, and unsupervised skill discovery. The main goal of this workshop is to bring together researchers in RL and investigate novel directions for learning task-agnostic representations, with the objective of advancing the field towards more scalable and effective solutions in RL.
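To make the first of these research questions concrete, below is a minimal sketch of one intrinsic-motivation objective: a prediction-error ("curiosity") reward, in which the agent is rewarded in proportion to how poorly a learned forward model predicts the next state, so unfamiliar transitions become attractive even with no task reward. This is an illustrative toy in NumPy, not code from the workshop or any of its papers; the linear forward model and all names (CuriosityReward, reward_and_update) are our own assumptions.

```python
import numpy as np

class CuriosityReward:
    """Toy intrinsic reward: squared prediction error of a learned
    forward model (a linear stand-in for the networks used in practice)."""

    def __init__(self, state_dim, action_dim, lr=1e-2):
        rng = np.random.default_rng(0)
        # Linear forward model: predicts next state from (state, action).
        self.W = rng.normal(scale=0.1, size=(state_dim, state_dim + action_dim))
        self.lr = lr

    def reward_and_update(self, state, action, next_state):
        x = np.concatenate([state, action])
        pred = self.W @ x
        err = next_state - pred
        # Intrinsic reward: high where the model predicts poorly,
        # i.e. in unfamiliar parts of the environment.
        r_int = float(err @ err)
        # One gradient step on 0.5 * ||next_state - W x||^2, so familiar
        # transitions become predictable and stop being rewarding.
        self.W += self.lr * np.outer(err, x)
        return r_int

# Usage: repeated visits to the same transition yield shrinking reward.
curiosity = CuriosityReward(state_dim=4, action_dim=2)
s, a, s_next = np.zeros(4), np.ones(2), np.ones(4)
r = curiosity.reward_and_update(s, a, s_next)
```

An agent trained purely on such a signal explores without any task reward; at downstream time, the intrinsic term can be dropped or combined with the task reward.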

We invited paper submissions to be presented at the workshop; see the Submissions section below for details.

Speakers

Pierre-Yves Oudeyer (Research Director, Inria)
Chelsea Finn (Assistant Professor; Google, Berkeley, Stanford)
Neil Bramley (Assistant Professor, University of Edinburgh)
Doina Precup (Professor; McGill, MILA, DeepMind)
Martin Riedmiller (Research Scientist, DeepMind)
Katja Hofmann (Senior Researcher, Microsoft Research)

Dates

Submission deadline: 29 March 2019 (11:59 pm AoE)
Notifications: 23 April 2019
Camera ready: 4 May 2019 (11:59 pm AoE)
Workshop: 6 May 2019

Sponsors

We thank our sponsors for making this workshop possible.

Schedule

Please find our recorded livestream here: https://slideslive.com/iclr/iclr-2019-r09-taskagnostic-reinforcement-learning-tarl

09:45 Opening
09:50 Invited talk by Martin Riedmiller: Internal Reward Predicates for Task Agnostic RL
10:20 Lightning talks
10:30 Posters + Coffee break
11:00 Invited talk by Chelsea Finn: What can we Learn from Unlabeled Interaction?
11:30 Invited talk by Doina Precup: Generalized Value Functions: Knowledge Representation for RL agents
12:00 Contributed talk by Vitchyr Pong: Skew-Fit: State-Covering Self-Supervised Reinforcement Learning
12:15 Contributed talk by Lisa Lee: State Marginal Matching with Mixtures of Policies
12:30 Invited talk by Katja Hofmann: Directions and Challenges in Multi-Task Reinforcement Learning
13:00 Lunch break
15:20 Contributed talk by Corey Lynch: Learning Latent Plans from Play Data
15:35 Contributed talk by Hugo Caselles-Dupré: Symmetry-Based Disentangled Representation Learning requires Interaction with Environments
15:50 Lightning talks
16:00 Posters + Coffee break
16:30 Invited talk by Pierre-Yves Oudeyer: Curiosity-driven exploration of learned goal spaces: Discovering independently controllable features
17:00 Invited talk by Neil Bramley: Intuitive experimentation in human and artificial agents
17:30 Panel discussion
18:00 End

Submissions

Papers should be in anonymous ICLR style and up to 5 pages, with an unlimited number of pages for references and appendix. Accepted papers will be presented during our poster session and made available on the workshop website. Selected authors will be offered a 10-minute talk at the workshop. This does not constitute an archival publication, and no formal workshop proceedings will be made available; contributors are therefore free to publish their work in journals or at conferences.

Submissions are now closed. Thanks to everyone for submitting!

Paper portal: https://cmt3.research.microsoft.com/tarl2019

Accepted papers

We received many interesting, high-quality submissions, of which we accepted 24 papers to be presented at our poster sessions. The order below is random, and the PDFs will be made available here shortly.

State Marginal Matching with Mixtures of Policies

Lisa Lee, Emilio Parisotto, Ben Eysenbach, Ruslan Salakhutdinov, Sergey Levine

Reinforcement Learning with Unknown Reward Functions

Ben Eysenbach, Jacob Tyo, Shixiang Gu, Ruslan Salakhutdinov, Zachary Lipton, Sergey Levine

Planning with Goal-Conditioned Policies

Soroush Nasiriany, Vitchyr Pong, Sergey Levine

A Self-Supervised Method for Mapping Instructions to Robot Policies

Hsin-Wei Yu, Po-Yu Wu, Chih-An Tsao, You-An Shen, Shih-Hsuan Lin, Zhang-Wei Hong, Yi-Hsiang Chang, Chun-Yi Lee

Dynamics-Aware Unsupervised Skill Learning

Archit Sharma, Shixiang Gu, Karol Hausman, Sergey Levine, Vikash Kumar

Hierarchical Policy Learning is Sensitive to Goal Space Design

Zach Dwiel, Madhavun Candadai, Mariano Phielipp, Arjun Bansal

Unsupervised Discovery of Decision States through Intrinsic Control

Nirbhay Modhe, Mohit Sharma, Prithvijit Chattopadhyay, Abhishek Das, Devi Parikh, Dhruv Batra, Ramakrishna Vedantam

Insights on Visual Representations for Embodied Navigation Tasks

Erik Wijmans, Julian Straub, Dhruv Batra, Judy Hoffman, Ari Morcos

Variational State Encoding as Intrinsic Motivation in Reinforcement Learning

Martin Klissarov, Riashat Islam, Khimya Khetarpal, Doina Precup

Learning Latent Plans from Play Data

Corey Lynch, Mohi Khansari, Ted Xiao, Vikash Kumar, Jonathan Tompson, Sergey Levine, Pierre Sermanet

Exploration via Sample-Efficient Subgoal Design

Yijia Wang, Brian Bell, Matthias Poloczek, Daniel Jiang

Exploration via Flow-Based Intrinsic Rewards

Hsuan-Kung Yang, Po-Han Chiang, Min-Fong Hong, Chun-Yi Lee

KeyIn: Discovering Subgoal Structure with Keyframe-based Video Prediction

Karl Pertsch, Oleh Rybkin, Jingyun Yang, Konstantinos Derpanis, Joseph Lim, Kostas Daniilidis, Andrew Jaegle

Task-Agnostic Constraining in Average Reward POMDPs

Guido Montufar, Johannes Rauh, Nihat Ay

Learning Robotic Manipulation Through Visual Planning and Acting

Angelina Wang, Thanard Kurutach, Kara Liu, Aviv Tamar, Pieter Abbeel

Automatic Curriculum Generation Via Task Perturbations For Reinforcement Learning

Srinivas Venkattaramanujam, Riashat Islam, Doina Precup

Symmetry-Based Disentangled Representation Learning requires Interaction with Environments

Hugo Caselles-Dupré, David Filliat, Michael Garcia Ortiz

Control What You Can: Intrinsically Motivated Reinforcement Learner with Task Planning Structure

Sebastian Blaes, Marin Vlastelica, Jia-Jie Zhu, Georg Martius

Skew-Fit: State-Covering Self-Supervised Reinforcement Learning

Vitchyr Pong, Murtaza Dalal, Steven Lin, Ashvin Nair, Shikhar Bahl, Sergey Levine

Unsupervised Representation Learning by Latent Plans

Ge Yang, Amy Zhang, Ari Morcos, Roberto Calandra

Organizers

Danijar Hafner (Google Brain, University of Toronto)
Amy Zhang (Facebook AI Research, McGill University)
Ahmed Touati (University of Montreal)
Deepak Pathak (UC Berkeley)
Frederik Ebert (UC Berkeley)
Rowan McAllister (UC Berkeley)
Roberto Calandra (Facebook AI Research)
Marc G. Bellemare (Google Brain, McGill University)
Raia Hadsell (DeepMind)
Alessandro Lazaric (Facebook AI Research)
Joelle Pineau (Facebook AI Research, McGill University)

For questions, please contact us at: taskagnosticrl@gmail.com