The growing capabilities of learning-based methods in control and robotics have precipitated a shift in the design of software for autonomous systems. Recent successes fuel the hope that robots will increasingly perform a variety of tasks while working alongside humans in complex, dynamic environments. However, the application of learning approaches to real-world robotic systems has been limited because real-world scenarios introduce challenges that do not arise in simulation.
In this workshop, we aim to identify and tackle the main challenges to learning on real robotic systems. First, many current machine learning methods rely on large quantities of labeled data. While raw sensor data is available at high rates, the required variety is hard to obtain, and the human effort needed to annotate the data or design reward functions is an even larger burden. Second, algorithms must guarantee some measure of safety and robustness to be deployed in real systems that interact with property and people. Instantaneous reset mechanisms, which are commonly used in simulation to recover from even critical failures, pose a great challenge to real robots. Third, the real world is significantly more complex and varied than curated datasets and simulations. Successful approaches must scale to this complexity, adapt to novel situations, and recover from mistakes.
As a community, we are exploring a wide range of solutions to each of these challenges. To explore the limits of the different directions, we aim in particular to address questions about the trade-offs and potential necessity of specific design aspects through the panel discussion as well as the invited presentations:
- Transfer learning (simulation to reality (sim2real), multitask, across domains, etc.)
- Explicit methods for planning, prediction, and uncertainty modelling
The primary focus of submissions should lie on tackling the challenges that result from operation in the real world. We encourage submissions that experiment on physical systems, and specifically those that consider algorithmic developments aimed at the challenges physical systems present. We believe this focus on real-world application will bring together a cross-section of researchers working in different areas, including our invited speakers, for a fruitful exchange of ideas.
Important Dates
- Submission deadline (extended): 13 September 2019 (Anywhere on Earth)
- Notification: 01 October 2019
- Camera ready: 01 December 2019
- Workshop: 14 December 2019
Invited Speakers
- Marc Deisenroth (Imperial College London / Prowler.io)
- Nima Fazeli (University of Michigan, Ann Arbor)
- Raia Hadsell (DeepMind)
- Edward Johns (Imperial College London)
- Takayuki Osa (Kyushu Institute of Technology)
- Angela Schoellig (University of Toronto)
Organizers
- Sanket Kamthe (Imperial College London)
- Kate Rakelly (University of California, Berkeley)
- Markus Wulfmeier (DeepMind)
- Roberto Calandra (Facebook AI Research)
- Danica Kragic (Royal Institute of Technology, KTH)
- Stefan Schaal (Google)
Schedule
09:00 | Introduction and opening remarks
09:15 | Invited talk - Marc Deisenroth
09:45 | Coffee break
10:30 | Poster session 1
11:15 | Contributed talk - Laura Smith presenting AVID: Translating Human Demonstrations for Automated Learning
11:30 | Invited talk - Takayuki Osa
12:00 | Lunch break
13:30 | Invited talk - Raia Hadsell
14:00 | Invited talk - Nima Fazeli
14:30 | Poster session 2
15:30 | Coffee break
16:00 | Contributed talk - Michelle Lee and Carlos Florensa presenting Guided Uncertainty-Aware Policy Optimization: Combining Learning and Model-Based Strategies for Sample-Efficient Policy Learning
16:15 | Invited talk - Angela Schoellig
16:45 | Invited talk - Edward Johns
17:15 | Panel discussion
18:00 | End
Accepted Papers
Accepted papers are listed in alphabetical order. All papers will be presented in poster format during both poster sessions.
- Deep Reinforcement Learning for Biomimetic Touch: Learning to Type Braille
  Alex Church, John Lloyd, Raia Hadsell, Nathan Lepora
- Improving Model-Based Reinforcement Learning via Model-Augmented Pathwise Derivative
  Ignasi Clavera, Yao (Violet) Fu, Pieter Abbeel
- Mutual Information Maximization for Robust Plannable Representations
  Ignasi Clavera, Yiming Ding, Pieter Abbeel
- Active Robot Imitation Learning with Autoencoders and Imagined Rollouts
  Norman Di Palo, Edward Johns
- Self-Supervised Correspondence in Visuomotor Policy Learning
  Peter R. Florence, Lucas Manuelli, Russ Tedrake
- Guided Uncertainty-Aware Policy Optimization: Combining Learning and Model-Based Strategies for Sample-Efficient Policy Learning
  Carlos Florensa, Michelle Lee, Jonathan Tremblay, Nathan Ratliff, Animesh Garg, Fabio Ramos, Dieter Fox
- Zero-Shot Reinforcement Learning with Deep Attention Convolutional Neural Networks
  Sahika Genc, Sunil Mallya, Sravan Babu Bodapati, Tao Sun, Yunzhe Tao
- H∞ Model-free Reinforcement Learning with Robust Stability Guarantee
  Minghao Han, Lixian Zhang, Yuan Tian, Jun Wang, Wei Pan
- Towards Object Detection from Motion
  Rico Jonschkowski, Austin Stone
- Towards More Sample Efficiency in Reinforcement Learning with Data Augmentation
  Yijiong Lin, Jiancong Huang, Matthieu Zimmer, Yisheng Guan, Juan Rojas, Paul Weng
- Hierarchical Foresight: Self-Supervised Learning of Long-Horizon Tasks via Visual Subgoal Generation
  Suraj Nair, Chelsea Finn
- AVID: Translating Human Demonstrations for Automated Learning
  Laura M. Smith, Nikita Dhawan, Marvin Zhang, Pieter Abbeel, Sergey Levine
- Imitation Learning of Robot Policies by Combining Language, Vision and Demonstration
  Simon B. Stepputtis, Joseph Campbell, Mariano Phielipp, Chitta Baral, Heni Ben Amor
- VILD: Variational Imitation Learning with Diverse-quality Demonstrations
  Voot Tangkaratt, Bo Han, Mohammad Emtiyaz Khan, Masashi Sugiyama
- Human-Robot Collaboration via Deep Reinforcement Learning of Real-World Interactions
  Jonas Tjomsland, Ali Shafti, Aldo Faisal
- Thinking While Moving: Deep Reinforcement Learning in Concurrent Environments
  Ted Xiao, Eric Jang, Dmitry Kalashnikov, Sergey Levine, Julian Ibarz, Karol Hausman, Alexander Herzog
- Morphology-Agnostic Visual Robotic Control
  Brian Yang, Dinesh Jayaraman, Glen Berseth, Alexei A. Efros, Sergey Levine
- Enhanced Adversarial Strategically-Timed Attacks on Deep Reinforcement Learning
  Chao-Han Huck Yang, Jun Qi, Pin-Yu Chen, I-Te Hung, Yi Ouyang, Xiaoli Ma
- SwarmNet: Towards Imitation Learning of Multi-Robot Behavior with Graph Neural Networks
  Siyu Zhou, Mariano Phielipp, Jorge Sefair, Sara Walker, Heni Ben Amor
Program Committee
We would like to thank the program committee for shaping the excellent technical program. In alphabetical order they are:
Abbas Abdolmaleki, Hany Abdulsamad, Andrea Bajcsy, Feryal Behbahani, Djalel Benbouzid, Michael Bloesch, Caterina Buizza, Roberto Calandra, Nutan Chen, Misha Denil, Coline Devin, Marco Ewerton, Walter Goodwin, Tuomas Haarnoja, Roland Hafner, James Harrison, Karol Hausman, Edward Johns, Ashvin Nair, Takayuki Osa, Simone Parisi, Akshara Rai, Nemanja Rakicevic, Dushyant Rao, Siddharth Reddy, Apoorva Sharma, Johannes A. Stork, Li Sun, Filipe Veiga, Ruohan Wang, Rob Weston, Yizhe Wu
Contacts
For any further questions, you can contact us at neuripswrl2019@robot-learning.ml
Sponsors
We are very thankful to our corporate sponsors for enabling us to provide student travel grants!